Educators everywhere are surprised, confused, and intrigued, sometimes all at once, by AI invading their classrooms. It's still too early to know how this will pan out. But at some level, we educators are responsible for its ultimate form, because we are the people who define "smart" and "right".
All educators evaluate students in one way or another. This could be a letter or number grade on a test, a threshold for acceptable homework completion, a written assessment of classroom performance, or countless other means. Whatever form it takes, our assessment ultimately compares a student's work against a standard of knowledge. In many cases, this means defining what constitutes three things: unacceptable work, acceptable work, and exceptional work. This matters to AI makers, because they, like educators, are in the business of building knowledgeable entities.
Obviously, AI makers don't want products that produce unacceptable work (although there is plenty of that). And exceptional work is difficult; it is best left to the experts: computer models trained to do just one thing, such as chess engines (more on that below). These "expert systems" operate on a different principle than LLM chatbots.
What modern AI strives for is acceptable work. After all, why would educators define a level of acceptability for students' work unless that standard bore some relationship to personal growth, moral or ethical development, or economic utility? The goal of AI is to meet this standard. In doing so, it touches virtually every industry, since every industry employs people educated to some standard.
Educators are the people who decide what is acceptable for AI. That standard is set by the millions of small decisions we make every day about what work and knowledge we will accept.
We are not the only ones who wield this power. Educators in other countries and cultures, often with government oversight, define their own standards, and those standards shape the success or failure of AI there. In China, for example, the standard for AI knowledge can be much higher than in the US, because the definition of acceptable schoolwork is often higher, especially in STEM fields. But Chinese education standards also declare certain topics off limits, such as the 1989 Tiananmen Square massacre. As reported in the Guardian, DeepSeek follows this content standard, while Western chatbots provide accurate descriptions of such events. The point is that many dimensions play a role in defining what constitutes acceptable knowledge.
Maybe this was obvious to others, especially non-educators, but it is only now dawning on me. The good news, I think, is that I and every other educator have a kind of superpower I never realized before. If you want to know how AI will ultimately be used in a particular culture, look at its educators and educational standards.
***
As I have written, YouTubers have had a lot of fun recently showing that chatbots can't play chess. I have pointed out that this is ironic, since the modern wave of artificial intelligence traces back to 1997, when an "expert system" computer program defeated the human world chess champion.
International Master Levy Rozman (aka GothamChess) tested the DeepSeek model. He found that DeepSeek, like ChatGPT and Bard, often makes illegal moves and plays poorly overall. In his game against the chatbot, Rozman won, but given DeepSeek's deep confusion about the rules, it could hardly be called a game.
Rozman next pitted DeepSeek against ChatGPT. Both systems played fairly standard openings, the kind found in any of thousands of chess websites and books, and the game progressed rationally for dozens of moves.
But then the game descended into chaos, with both systems resurrecting captured pieces and often losing track of where the pieces stood on the board. ChatGPT was slightly better, but both models ultimately went haywire.
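The contrast with an expert system is worth making concrete. A chess engine cannot play an illegal move because every candidate is checked against the rules of the game before it is considered, whereas an LLM only predicts plausible-looking text. The toy sketch below (my own illustration, not Rozman's test or any real engine) shows the rules-engine principle for a single piece: knight moves on an otherwise empty board, with squares in standard algebraic notation.

```python
# Toy illustration of the "expert system" principle: moves are
# validated against the game's rules, so illegal moves are
# impossible by construction. (Knights on an empty board only;
# a real engine validates every piece, captures, checks, etc.)

FILES = "abcdefgh"

def knight_moves(square: str) -> set[str]:
    """All legal knight destinations from `square` on an empty board."""
    f, r = FILES.index(square[0]), int(square[1]) - 1
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return {FILES[f + df] + str(r + dr + 1)
            for df, dr in deltas
            if 0 <= f + df < 8 and 0 <= r + dr < 8}

def is_legal(src: str, dst: str) -> bool:
    """A rules engine rejects any move not generated by the rules."""
    return dst in knight_moves(src)

print(is_legal("g1", "f3"))  # True: a standard opening knight move
print(is_legal("g1", "g3"))  # False: knights cannot move straight ahead
```

An LLM has no such gate: it emits whatever move-like string its training makes likely, which is exactly how pieces come back from the dead mid-game.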
Perhaps this is more evidence for my claim about educators. The US and China are middling powers in the number of chess grandmasters they produce per capita, and neither country officially incorporates chess into its schooling. By contrast, in 2016 Russia required 33 hours of chess study for all first-year students. If Russian scientists ever build a globally competitive LLM, I predict it will be better at chess than the US or Chinese models.