Seems like a reasonably-cromulent use-case -- or at least, it fits in with my own uses of LLMs.
I suck at spreadsheets. I know they can do both useful and amazing things, but my daily life does not revolve around spreadsheets and I simply do not understand most of the syntax and operations required to make even fairly basic things work. It requires a lot of time and effort for me to get simple things done with a spreadsheet on the rare occasion that I need to manipulate one.
There are things in life that I am very good at; spreadsheets are simply not amongst them.
But I do know what I want, and I generally even have a ballpark idea of what the results should look like, and how to calculate it by hand [horror]. I just don't always know how to articulate it in a way that LibreOffice or Google Sheets or whatever can understand.
LLMs have helped to bridge that gap for me, but it's a pain in the ass: I have to be very careful with the context that I give the LLM (because garbage in is garbage out).
But in the demo, the LLM has the context already. This skips a ton of preamble setup steps to get the LLM ready to provide potentially-useful work, and moves closer to just making a request and getting the desired output.
Having one unified interface saves even more steps.
(And no, this isn't for everyone.)