Big Tech is already warning us about AI privacy problems

Apple has restricted employee use of OpenAI’s ChatGPT and Microsoft’s Copilot, The Wall Street Journal reports. ChatGPT has been on Apple’s block list for months, Bloomberg’s Mark Gurman adds.

It’s not just Apple. Samsung and Verizon in the tech world, and a roster of major banks (Bank of America, Citi, Deutsche Bank, Goldman Sachs, Wells Fargo, JPMorgan) have restricted these tools too. The concern is confidential data escaping: ChatGPT’s privacy policy explicitly states that your prompts can be used to train its models unless you opt out. And the fear of leaks is not unfounded. In March, a bug in ChatGPT exposed data from other users’ conversations.


I tend to think of this ban as a very loud warning shot.

One obvious use of this technology is in customer service, as companies try to reduce costs. But for customer service to work, customers have to give up their details — sometimes private, sometimes sensitive. How do companies plan to secure their customer service bots?

This is not just a customer service problem. Let’s say Disney decides to let AI — rather than its visual effects departments — write its Marvel movies. Is there a world where Disney is willing to let Marvel spoilers leak?

One thing that’s generally true of the tech industry is that early-stage companies (think of Facebook in its scrappier days) don’t pay much attention to data security. Given that, it makes sense to limit their exposure to sensitive material, as OpenAI itself suggests. (“Please don’t share any sensitive information in your conversations.”) That isn’t a problem unique to AI.


But I am curious whether there are problems intrinsic to AI chatbots. One of the big expenses of doing AI is compute. Setting up your own data center is expensive, but relying on cloud computing means your queries are processed on a remote server, where you’re essentially trusting someone else to secure your data. You can see why the banks are nervous: financial data is extremely sensitive.

On top of unintentional public leaks, there is also the possibility of deliberate corporate espionage. At first glance, that looks like a tech-industry problem, since theft of trade secrets is one of the obvious risks. But the big tech companies have moved into streaming, so I wonder whether it isn’t a problem for the creative side of the business, too.

There is always a trade-off between privacy and usefulness in tech products. In many cases, as with Google and Facebook, users have traded their privacy for free products. Google’s Bard states that queries will be used to “improve and develop Google’s products, services, and machine learning technologies.”

It is possible that these big, shrewd, secrecy-focused companies are just being paranoid and there is nothing to worry about. But let’s say they’re right. If so, I can think of a few possibilities for the future of AI chatbots. The first is that the AI wave turns out to be like the metaverse wave: a bust. The second is that AI companies come under pressure to overhaul and clearly spell out their security practices. The third is that every company that wants to use AI has to build its own model or, at the very least, run its own processing, which sounds expensive and hard to scale. And the fourth is an online privacy nightmare, where your airline (or debt collector, or pharmacy, or whoever) regularly leaks your data.

I don’t know how this will shake out. But if the most security-obsessed companies are shutting down their use of AI, there may be good reason for the rest of us to be wary, too.

