A few weeks ago, my spouse and I made a bet. I said there was no way ChatGPT could reasonably mimic my writing style for a smartwatch review. I'd actually asked the bot to do this months ago, and the results were hilarious. My spouse bet that they could ask ChatGPT the exact same thing and get a much better result. They said my problem was that I didn't know the right queries to ask to get the answer I wanted.
To my dismay, they were right. ChatGPT wrote a much better review when my spouse asked.
That memory flashed through my mind as I covered the Google I/O conference. This year's keynote was a two-hour thesis on AI, how it will affect search, and all the ways it can boldly and responsibly make our lives better. A lot of it was neat. But I shuddered when Google openly admitted that it's hard to ask AI the right questions.
During a demo of Duet AI, a suite of tools that will live inside Gmail, Docs, and more, Google showed off a feature called Sidekick that can proactively present you with prompts based on the Workspace document you're working on. In other words, it prompts you on how to prompt it by telling you what it can do.
This came up again later in the keynote when Google showed off its new AI search results, called Search Generative Experience (SGE). SGE takes any question you type into the search bar and generates a mini report or “snapshot” at the top of the page. At the bottom of that snapshot are follow-up questions.
As someone whose job it is to ask questions, both demos were troubling. The queries and prompts Google used on stage look nothing like the questions I type into my search bar. My search queries often read like a toddler's speech. (They are also usually followed by "Reddit" so I get answers from humans instead of an SEO content mill.) Things like "Bald Dennis BlackBerry movie actor name." When I'm looking for something I wrote about Peloton's 2022 earnings, I pop in "site:theverge.com Peloton McCarthy shipping fees." I rarely search for things like "What should I do in Paris for a weekend?" It wouldn't even occur to me to ask Google something like that.
I'll admit that when staring at any kind of generative AI, I don't know what I'm supposed to do. I can watch billions of demos, and the blank window still mocks me. It's like I'm back in second grade and my angry teacher has just called on me to answer a question I don't know the answer to. When I do ask something, the results I get are often so bad that it would take longer to make them presentable than to just do the work myself.
My spouse, on the other hand, has taken to AI like a fish to water. After our bet, I watched them play with ChatGPT for a solid hour. What struck me most was how different our prompts and queries were. Mine were short, open-ended, and broad. My spouse's left very little room for the AI's interpretation. "You have to hold its hand," they said. "You have to feed it exactly what you need." Their commands and queries are specific, long, and often include reference links or datasets. But even they have to rephrase prompts and queries over and over again to get exactly what they're looking for.
And this is just ChatGPT. What Google is doing goes a step further. The goal of Duet AI is to pull contextual data from your emails and documents and intuit what you need (which is funny, because half the time I don't even know what I need). SGE is designed to answer your questions, even those that don't have a "correct" answer, and then anticipate what you might ask next. For this more intuitive AI to work, programmers have to make it so the AI knows what questions to ask users so that users, in turn, can ask it the right questions. Which means programmers have to know what questions users want answered before they've even asked them. It gives me a headache just thinking about it.
Not to get too philosophical, but you could say that life is all about figuring out the right questions to ask. For me, the most unnerving thing about the AI era is that I don't think any of us truly knows what we want from an AI. Google thinks it's everything it showed on stage at I/O. OpenAI thinks it's chatbots. Microsoft thinks it's a really interesting chatbot. But whenever you talk to the average person about AI these days, the question everyone wants answered is simple: how will AI change and affect my life?
The problem is that no one, not even the bots, has a good answer for that yet. And I don't think we'll get a satisfying one until everyone takes the time to rewire their brains to talk to AI more fluently.