With AI, we can get plenty of 'coulds', including some hallucinated ones. If I ask an LLM what I could do, I expect a long list of options. But if I ask what I should do, that's where things come unstuck.
When I talk to my accountant, for example, I want the 'should', not just the 'coulds'. I want them to put their neck on the line: to bring their reputation and expertise to bear, think about my context and what they know of me, and tell me what I should do.
What powers that ability to provide the should?
- Deep contextual awareness - knowing me, my business, my goals, my aspirations. But also knowing my risk tolerance, my preferences, how I like to communicate... and whether I'm having a bad day or not. Much of this context is invisible to AI because there are no (or very limited) 'digital signals' to convey it.
- Bonding and trust - how well do you know this person? How many times have they helped you in the past? Do you get along? Do they meet your expectations of someone in their position? All these things, and numerous intangibles, contribute to their ability to tell you what you should do, and to your willingness to take it seriously.
- Professional licensing? - do we place more trust in professions that are licensed and bonded by a professional body? By regulators? Attorney-client privilege means one is more likely to tell the whole truth to one's lawyer, right? Sure, plenty of people are sharing more than they should with LLMs. But that's not the same as having someone you can trust, and a means of recourse if they break that trust.
- Human-level accountability - there is something about knowing the consequences a professional would suffer if they knowingly led you down the wrong path. Knowing that they have something 'on the line' when they are advising you. The stakes are high: reputation, financial penalties, possible loss of freedom. These things carry weight.