What is a good question? Part 1
Great questions are often sequenced, concise, and asked at the right time. The best kind of question also depends on the task. Welcome to an adventure with AI-generated questions.
This is the first in a series of posts about the question model we’re developing at Vnote.ai. As with most challenges, developing a human-level understanding of the problem is the essential first step. I'll explain some of our thinking and early learnings in this post. Future posts will highlight the steps we took to build (and measure!) a model that outperforms GPT-4 for our task, but you shouldn’t need to be a data scientist to enjoy the series. Asking great questions is an amazing skill. Read on to hear how questions might work in a world enabled by AI.
Part 1 - What Is a Good Question?
By this point, you’ve probably seen Gemini, Claude, or ChatGPT answer questions in amazing ways. They provide detailed facts, generate clear text, and create images and code. I use these tools every day, but I have a subtle fear of relying too heavily on AI for answers. I don’t want to be that boneless captain in WALL-E, fed with milkshakes, unable to chart a course.
So what do I want? Well, one thing I love is that AI models can ask questions, not just provide results. Asking good questions involves understanding where the user needs to go, but the approach is more collaborative than directive. It requires a level of partnership and trust. A thoughtful intelligence. A kind of active listening. We’re in the early days of this movie. Asking a good question is often harder than providing a reply, but it can be really helpful if it's done right.
What are you trying to do?
Questions are only as good as the result they help you achieve. Executive coaches and therapists often ask open-ended questions to help clients understand their ideas. Textbooks ask questions to make students go through a specific series of steps. Voice-writing needs to ask a different kind of question than either of these to get a good result.
Before going into that, let’s examine the AI models we know best. ChatGPT, Claude, and Gemini all work in the following way.
Users ask questions of large language models. The models have been trained to answer questions, not ask them. The nice part about chatting with AI is the conversational interface: you can evolve the response as the exchange goes on.
I'm seeing more and more popular models ask a question at the end of their reply. They often list five questions you might want to consider. When operated in voice mode, they sometimes end with something as simple as “do you want to hear more?” It helps the user provide feedback and stay engaged. There's nothing wrong with that! But is this the end state for the kinds of questions we want to receive from LLMs?
We often initiate our interactions with AI when we need a result. If you need to talk something out and get the right kind of feedback, you probably want a different kind of interaction. AI models can also serve as active listeners and writing partners if that is what they’re trained to do. The smartest people I know have a sense of when to simply listen.
Voice-writing works like an executive assistant
Imagine a world where you have a “director of communications” who can be there to help you at any time. That person is not likely to give you a long reply as soon as you speak. They know your communication style and have a degree of context. You go to them because you want to get the communication right. They listen attentively and ask questions about the parts they need. If they were to help you write up a new project description, the model would look something like this.
That’s what we’re trying to do.
The best questions represent the fastest path. They ask for just enough information to create the essential communication. That often requires making you think about something that hadn't crossed your mind, but it shouldn't be distracting. If a question leads you down a tangent, it is bad! If it makes you quit, that's worse! But if it helps you add essential information to the final product, that’s great. The best set of questions is the one that helps you get to a great written draft without any other major tweaks.
Ask only one question at a time.
One lesson we learned is that it is best to ask only one question at a time. This is especially true with voice-first interfaces. If a system provides a written response, it is easy to show many questions at the same time, and showing multiple questions helps people see the directions they could take. But when someone verbally asks more than one question, it is very hard to keep all of them in your head. When you try to reply to more than one thing aloud, your answer to the first part inevitably influences your reply to the next. Questions generate mental work. The best system for generating questions takes responsibility for prioritizing what comes first.
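To make that concrete, here is a rough sketch of what a one-question-at-a-time step could look like in code. This is not our production pipeline; it assumes the off-the-shelf OpenAI Python client, an illustrative model name, and a made-up prompt, and it simply asks the model to pick the single most useful follow-up question instead of listing several.

```python
# A minimal sketch of the "one question at a time" rule, not Vnote.ai's
# actual system. Assumes the OpenAI Python client; the model name and
# prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

def next_question(transcript: str, candidates: list[str]) -> str:
    """Return the single follow-up question the user should hear next."""
    numbered = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(candidates))
    prompt = (
        "A user is drafting a document by voice. Here is what they said:\n\n"
        f"{transcript}\n\n"
        "Candidate follow-up questions:\n"
        f"{numbered}\n\n"
        "Pick the one question that adds the most essential missing "
        "information for the final draft. Reply with that question only."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat model could be used here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()
```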
Ask short questions
Another lesson we learned was to keep questions short. Executive coaches have a rule that powerful questions are eight words or fewer. Writing and executive coaching are different tasks, but the same principles for questions often apply. Short questions are easier to understand, which gives you, as the end user, more space to focus on your ideas.
Since context is often necessary, we provide more information through voice. Users benefit from being told which parts of their statements were clear and why a question is being asked. It feels good, and it provides important context. But the question itself should be relatively short. It often comes at the end of a verbal response, and in our system it is also shown on the screen. Great questions often make people pause and think in multiple directions. A good user interface should remind them of the essential part they need to come back to in order to move forward.
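As a sketch of how that length rule might be enforced, the snippet below is hypothetical (the helper names and the rewrite step are ours for illustration): anything longer than eight words gets sent back for a shorter phrasing before it is spoken or shown.

```python
# Hypothetical guard for the "eight words or fewer" rule described above.
# `rewrite_fn` can be any callable (for example, an LLM call) that takes a
# long question and returns a shorter phrasing.
MAX_WORDS = 8

def is_short_enough(question: str, max_words: int = MAX_WORDS) -> bool:
    """True if the question respects the coaching rule of eight words or fewer."""
    return len(question.split()) <= max_words

def enforce_brevity(question: str, rewrite_fn, max_words: int = MAX_WORDS) -> str:
    """Return the question unchanged if it is short, otherwise request a rewrite."""
    if is_short_enough(question, max_words):
        return question
    return rewrite_fn(f"Rewrite in {max_words} words or fewer: {question}")
```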
It’s ok not to ask anything at all.
As we discuss “what” makes a good question, it is also important to address “when” it should be asked. Sometimes the best move is not to ask anything at all. Our goal is to help you clearly communicate your ideas. If you can say it perfectly, that’s great! Making questions optional is ok.
Questionless workflows are especially common when people use voice-writing to draft an email. When you have just had an interaction and know what you want to say, you might be able to quickly state everything you need to get out. If you feel good when you finish, you can convert your rambles into a clearly worded draft. If the message you were trying to create is less clear, you can always request a question at the end.
Even though our users are not required to answer a question (outside of special circumstances where we know they need to say more), they are willing to engage. 62% of the sessions we logged last week involved a user proactively requesting at least one question. Many users requested 5 to 10 questions before they were comfortable with what they were trying to create. One of the leading indicators for “what makes a question good” is whether people want to hear more. There’s almost an addictive power to this, right? If you’re going to give a speech or present a plan to your team, you probably don’t want to sound dumb. If you talk it out with an AI and it only comes up with questions you’re ready to address, you can walk into your meetings feeling more prepared. If you’re talking out your ideas to create written work, it’s much easier to answer the obvious questions before focusing on lower-level tasks such as spelling, word choice, and the organizational elements that help you convey your ideas.
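For the curious, that engagement number is just a ratio over session logs. Here is a toy sketch of how it could be computed; the event schema is hypothetical and not our real logging format.

```python
# Toy calculation of the metric above: the share of sessions in which the
# user proactively requested at least one question. The event schema is
# hypothetical.
from typing import Iterable

def question_request_rate(sessions: Iterable[list[dict]]) -> float:
    """Fraction of sessions containing at least one 'question_requested' event."""
    sessions = list(sessions)
    if not sessions:
        return 0.0
    engaged = sum(
        1 for events in sessions
        if any(e.get("type") == "question_requested" for e in events)
    )
    return engaged / len(sessions)

# Example: two of three sessions requested a question -> 0.67
logs = [
    [{"type": "dictation"}, {"type": "question_requested"}],
    [{"type": "dictation"}],
    [{"type": "question_requested"}, {"type": "question_requested"}],
]
print(round(question_request_rate(logs), 2))
```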
Evaluate the results
The hardest part of asking good questions is evaluating the results. Rambling replies are often ok as long as they come around at the end. You’ve probably experienced this when talking to a friend. When you ask a good question, they might not know how to reply at first, but they may discover clarity as they speak. The questions that inspire these thoughts are often good, but they’re hard to judge.
In 2017, Google researchers wrote a revolutionary paper called “Attention Is All You Need.” The paper introduced the “Transformer” architecture, the T in GPT. OpenAI’s co-founder, Greg Brockman, has noted that evaluations are emerging as the next big thing. If you want to know how we’re approaching evaluations at Vnote.ai, you’ll need to read our next post.
Stay tuned
If you want to hear more, please comment on this post or reach out on LinkedIn. We’re running a series of workshops for executives, educators, and professionals who want to write online. Join in the discussion to see how voice-writing questions can help you in your daily work.
P.S. Many thanks to FeedbackFreak.com for reviewing this article. Their service exceeded my expectations. I recommend checking it out.