That’s pretty good!
I do use LLMs sometimes as a secondary thinking partner, but not a primary one: I like to work through a problem myself, then ask the model for various perspectives, explicitly requesting that it push back and not just agree with everything I say (their default mode is to please the user as quickly as possible, which leads to plausible-sounding hallucinations). I feel like you can get more efficient use out of them if you understand how they work, but honestly there isn’t much public education about them, and they’ve been thrust on us by almost every tech company, including Google Search (tip: add -ai after your search query to skip AI entirely).
In fact, there are internal memos from Google stating that they aim to increase user time on the search engine, since more time on the page means more ad revenue. This quite literally means they do NOT prioritize giving you the correct answer immediately (whether from the AI summary or the search results), because that would let you get what you need and close the page. If search were too effective, there would be less exposure to ads, so it’s in their financial interest to keep you going in circles as long as possible.
That said, of all the AI models, I find Google’s in-search AI summary one of the worst (perhaps intentionally so). Dedicated LLMs such as ChatGPT, Claude, or Gemini are better.
But to some earlier points about how it has helped get to a faster answer, I’d play devil’s advocate: is it really saving time if there’s a 50/50 chance the answer it provides is completely false? It’s basically a coin flip: it may provide the perfect answer and you’ll be on your way in less than ten seconds, or it may hallucinate a solution that doesn’t work, leading you down endless rabbit holes for an hour until you end up asking the question on the forum anyway… Regarding software (including Dorico), I find all of the LLMs have this problem of literally inventing menu options and features that simply do not exist, which can be far more confusing than helpful.
That said, one AI tool that has become interesting to me is Google’s paid NotebookLM. It’s actually quite good because you effectively ground it in your own information (PDFs, books, articles, videos, etc.). Since it sticks to only the provided sources, there’s less likelihood it will cross wires with irrelevant data from its training on the greater internet. Even then, I find it most helpful to ask it to challenge and pick information apart, and even demand that it prove its sources, which can reduce falsehoods. I’d be curious to feed the entire Dorico PDF manual to NotebookLM and ask it questions that way, so its answers draw only on the actual manual and not all kinds of other random and misleading stuff.
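For anyone curious what “sticking to the provided sources” looks like mechanically, here’s a minimal toy sketch of the grounding idea in Python. To be clear, this is my own illustration, not how NotebookLM actually works (real systems use embedding-based retrieval, not keyword overlap), and the helper names and sample manual snippets are invented for the example:

```python
def score(passage: str, question: str) -> int:
    """Count question words that appear in the passage (crude stand-in
    for real retrieval, which would use embeddings)."""
    words = {w.lower().strip("?.,") for w in question.split()}
    return sum(1 for w in passage.lower().split() if w in words)

def build_prompt(passages: list[str], question: str, top_k: int = 2) -> str:
    """Pick the best-matching passages and wrap them in a prompt that
    forbids the model from using outside knowledge."""
    best = sorted(passages, key=lambda p: score(p, question), reverse=True)[:top_k]
    context = "\n---\n".join(best)
    return (
        "Answer using ONLY the sources below. "
        "If the answer is not in the sources, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

# Hypothetical manual snippets, stand-ins for chunks of a real PDF.
manual = [
    "Casting off is the distribution of music across systems and frames.",
    "The Engrave mode toolbox contains tools for graphical editing.",
    "Flows are separate spans of music within a single project.",
]
prompt = build_prompt(manual, "What is casting off?")
```

The point is simply that the prompt carries your retrieved passages along with an instruction to answer only from them, which is why a grounded tool is less prone to cross wires with unrelated material than an open-ended chat.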
But even then, I also agree with Lillie and others that the manual is great, and I’d rather get a correct answer 100% of the time (even if it takes a little longer) than take my chances on a 50/50 potential wild-goose chase in search of a solution that doesn’t even exist in the software.
Going back to the very first post in this long thread by @k_b: I think one issue behind the many simple questions asked in this forum is that the Dorico manual seldom pops up high in the Google results when you query something.
It is also relatively seldom referenced directly in the AI answer. The first “finds” in the Google response are nearly always forum contributions, often with old ones first.
Maybe this has changed since the devious AI strategy @wing described took hold, but in the past the company that owns some web content (here, the Dorico manual) could, by means of a modest payment to Google and better indexing of that content, make sure it, and even better, the correct chapter within it, popped up before any forum contributions when someone searched for a simple program function.
I’ve been down this rabbit hole before, but for anyone else curious, Cory Doctorow (author of Esh!ttification) and Ed Zitron have written at length about this. It was maybe 6 months ago when I had a “when did Google become this terrible?” moment, LOL, after the AI summary was completely wrong and most of the first page was “Sponsored Results” that I didn’t care about, so I started reading that type of stuff.
I assume most people will think it’s crazy to pay for a search engine, but I switched to Kagi back when I was reading about the esh!ttification of Google Search, and have been happy with it. No AI summaries unless I want them (add a ? to the search), no paid ad results, a business model not built on using your data, etc. Not sure how long I’ll keep it, but it’s currently worth it not to have to wade through all the crap Google serves up at the beginning of any search results.
There are also ad-free, no-cost search engines available, like DuckDuckGo or Startpage. Neither tracks you or stores your searches.
I tried DDG for a while, but thought its search results were kinda terrible too. Searching for “dorico music notation software”, here’s what DDG serves up at the top:
Here’s Kagi:
Here’s Google, LOL:
I take it, then, you’re neither an avid supporter of Google nor a Google supporter of Avid.
I have two browser extensions installed: the Ghostery ad blocker and Cookie AutoDelete. With minimal tweaking, I auto-save cookies for sites like this forum, Steinberg, etc., and auto-delete them for everything else. Ghostery then blocks all of the trackers unless I allow them, either globally or per website.
I can’t remember the last time I saw ads on the Google home page, on YouTube, any popups, etc. One of these days I’ll find the time to migrate my email and calendars away from Google so that I no longer have to accept their cookies.
As for AI, I’ve tried everything, and in my experience Grok Expert has been consistently and significantly ahead in depth and accuracy, followed by Claude and then Gemini. ChatGPT has been the worst in terms of the amount of false information it produces. It also has the most annoying tone and style.
Some things I can’t share, but I have seen specific examples where the input is controlled, meaning we know precisely what information the (recent) LLM has seen, and verified that its response when confronted with slightly conflicting source facts can be to make things up. If one source says the party is at 7 PM and another says it’s at 8, the AI may state with confidence that the party is at 7:30, just to illustrate what AI inference can do. I know that’s a trivial example, but conflicting sources are AI poison that currently has to be fought, and it also makes “jailbreaking” problematic. By jailbreaking I mean that, despite everything, you can prove the AI is still getting information from sources outside your theoretically controlled environment. For instance, if you ask the AI who the members of the Beatles were and it answers correctly, but you know it didn’t get that from the docs you fed it. Again trivial, but it illustrates the problem AI would have with uncontrolled, conflicting sources.
I somewhat agree with people who say it’s here and has uses. But I absolutely know it is untrustworthy, even in a controlled environment, without a lot of human work and verification first. It needs a team of Lillies working with it. So right now, in the wild, I don’t see trust being a reasonable thing to give it. Imagine it training on this forum, and also other forums and…
Oh, and just for fun: Google DeepMind demonstrated AI writing music at NAMM 2026, and another company showed “ethical AI” writing new parts for you.
Oxymoron right there.
The problem is that people aren’t using the search bar.
The search bar only works if you know which keywords to use. If you need to ask a question, there’s a strong chance you don’t know Dorico’s word for what you need, e.g. “casting off”. AI will understand from just a description.


