AI: When ChatGPT gets dumber and dumber

Bobby Cirus


Our columnist actually sees his relationship with ChatGPT as a friendship, but now the useful tool is becoming less and less useful.


If AI threatens to get ever dumber, it might help to think for yourself. Photo: Berit Kessler/Zoonar/imago

AI is great. ChatGPT recently helped me fend off an aggressive debt collection claim from Deutsche Bahn. Bam: objection letter written, 180 euros not paid. Of course, I still had to put in some work of my own and check everything at the end, but here AI definitely gave me the upper hand. No sooner had ChatGPT – sorry! – threatened legal action than a registered letter arrived in which all claims were dropped.

I owe a lot to my friend ChatGPT. Annoying car rental companies, for example, that wanted to harass me with various fines, repatriation costs, and compensation I didn’t actually owe, were won over by the legal arguments of my AI friend, whom I affectionately call Jean Gipitó. He answers my questions, writes invoices, and speeds up my work tremendously. When I want to learn a new technology or framework, I now ask the AI. Since ChatGPT and I teamed up, I feel like I can learn new technologies faster and get everything done.

So everything’s fine? Well. Unfortunately, I am not ChatGPT’s only friend. And that bothers me. Sometimes I come across texts on the internet and ask myself: Is that you? Did ChatGPT write this? I worry about my friend. I’m afraid he’s getting dumber.

One reason generative AI has become so good is that the companies behind it have been feeding it vast amounts of data in recent years. Leaving aside vexing questions like copyright and data protection, the quality of future AI will be determined by who has the most human-generated data. Companies like Alphabet and OpenAI are already racing to find all the human data available on the internet and use it as training data.

The problem: the more AI-generated this data is, the more we are feeding AI data that it produced itself. In the long run, this essentially leads to AI mad cow disease. If AI consumes too much synthetic data, it suffers from an effect called “model autophagy disorder.” In other words, it eats itself.
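The mechanism behind this self-consumption can be seen in a toy simulation (my own illustrative sketch, not from the article): fit a simple statistical model to data, then train the next "generation" only on samples drawn from that fit, and repeat. Over generations, the rare tails of the original human data vanish first, and the model's picture of the world narrows.

```python
import random
import statistics

# Hypothetical toy model of "model autophagy": each generation is a
# Gaussian fitted to the previous generation's purely synthetic output.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # "human" data

spreads = []
for generation in range(10):
    mu = statistics.fmean(data)       # fit the model: estimate mean
    sigma = statistics.pstdev(data)   # ...and spread
    spreads.append(sigma)
    # the next generation trains only on the previous model's samples
    data = [random.gauss(mu, sigma) for _ in range(1000)]

# The estimated spread drifts with each synthetic generation; the
# extremes of the original distribution are the first thing lost.
print(spreads)
```

This is only a caricature of what happens with large language models, but it captures the core point: without fresh human-generated data in the loop, each generation can only reproduce (and slowly erode) what the last one already knew.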

This text comes from wochentaz, our weekly paper from the left! Every week, wochentaz covers the world as it is and the world as it could be: a left-wing weekly with a voice, an attitude, and the special taz view of the world. New every Saturday at the kiosk, and of course available by subscription.

Unfortunately, there is still little data on how much AI-generated content is on the internet. But one thing is true: the temptation is great. Anyone who wants to market a product or push a political campaign relies on AI today, and will even more so in the future. It’s like with climate destruction: in the short term, it pays for individuals to act purely in their own self-interest and churn out content with AI, and in doing so to destroy what is good for all of us.

But does it really have to come to that? No. For a start, we could collectively reflect on how astonishing all of this already is, and focus on making sensible use of the AI models that exist. Because despite all the euphoria, the social upheaval is enormous. We humans are the resource that AI needs to survive. Our output is the lifeblood of these amazing objection-letter-writing, DNS-troubleshooting, Linux-command-explaining machines.

We humans now need some time to process all this intelligently, so that we can generate new data and provide high-quality fresh food for AI. I don’t want you to get any dumber, dear generative AI. So take your time, humans.

