
AI and the need to believe




I’ll be honest. When I first started working with large language models, it felt like magic. It felt like they were thinking, not just automating search. They didn’t just spit out facts; they responded in ways that felt more meaningful than my typical (and often frustrating) transactional exchanges with search engines. These models challenged me, surprised me, and even comforted me.

Recently, though, I’ve noticed myself pulling back.

As I delved into the technical side – the mechanics of transformers, the structure of token generation, the elegant (and still inscrutable) mathematics beneath the surface – I began to see these systems differently. A recent article by Professor Luciano Floridi helped crystallize why.
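To make “token generation” concrete, here is a minimal toy sketch of the loop involved. Everything in it is an illustrative assumption: the tiny vocabulary, the made-up uniform probabilities, and the hypothetical next_token_probs helper all stand in for a real transformer’s learned distribution. The only point is the shape of the process: text is produced one token at a time by sampling from a probability distribution.

```python
import random

# Toy vocabulary; a real model has tens of thousands of tokens.
VOCAB = ["the", "model", "predicts", "a", "token", "."]

def next_token_probs(context: list[str]) -> list[float]:
    # A real transformer computes these probabilities from the context
    # via learned weights; here we fake a uniform distribution.
    return [1.0 / len(VOCAB)] * len(VOCAB)

def generate(prompt: list[str], max_tokens: int = 5) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        # Sample the next token: no plan, no intent, just a draw
        # from a distribution conditioned on what came before.
        tokens.append(random.choices(VOCAB, weights=probs, k=1)[0])
    return tokens

print(" ".join(generate(["the", "model"])))
```

Described at that level, nothing in the loop intends anything, which is part of why I began to see the systems differently.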

He calls it semantic pareidolia: the moment we perceive meaning or intention where there is none, like seeing a face in the clouds, or hearing a personality in the Waze voice on the drive to work. Floridi argues that when we interact with AI, we aren’t encountering intelligence; we’re encountering our own reflection, projecting intelligence onto something that merely behaves as if it understood.

That hit home. It named exactly what I’d been feeling: the connection was real, but so was the cognitive dissonance underneath it.

I don’t think it was wrong to feel a connection with these models. But maybe I was too generous about what they were actually doing. I’ve spent a lot of time describing LLMs as cognitive partners and mirrors, and while those metaphors still hold value, I started asking myself: was I talking to a thinking machine, or to my own words echoing back out of a high-dimensional vector space?

Floridi makes a compelling argument. He explains how this tendency to over-attribute meaning is amplified by loneliness, by market forces, and by the eerie realism of today’s models. And he warns that it can slide from playful personification into something more dangerous, a kind of technological idolatry. It’s a slippery slope: first we trust, then we depend, and then we believe.

Honestly, I’ve felt that shift in myself. Early on, LLMs resonated with me emotionally as much as they impressed me with their usefulness. That’s worth paying attention to, not as a defect in the machines, but as a feature of our psychology.

What I appreciate most about Floridi’s discussion is its balance. He isn’t calling for panic, but for clarity: for design practices that help us stay grounded, and for a kind of cognitive literacy that helps users understand what these systems are and how they work.

That tracks with my own evolution with LLMs. I still believe in the transformative power of these tools. I still think they can help us write, learn, diagnose, and create in extraordinary ways. But I’ve become more careful with my language, and I now try to distinguish critically between systems that sound intelligent and systems that are intelligent. That distinction may be harder to draw than it looks, but it’s an important one.

So maybe this isn’t disillusionment. Maybe it’s just growth in how we relate to our technology.

Floridi’s idea of semantic pareidolia gives us a useful lens. It’s also an invitation: to see AI more clearly, and to see ourselves more honestly.


