It could likely also be injected via malicious websites, force-shared google docs etc.
If a unknowing user asks a simple question, and Gemini reaches out to a malicious website for an answer, the prompt could be injected.
Additionally it could be taken out of an email / doc that was previously sent to the innocent user if the user asked Gemini to search their email or docs or something.
Kind of crazy the number of delivery vectors there are for these connected LLMs
If a unknowing user asks a simple question, and Gemini reaches out to a malicious website for an answer, the prompt could be injected.
Additionally it could be taken out of an email / doc that was previously sent to the innocent user if the user asked Gemini to search their email or docs or something.
Kind of crazy the number of delivery vectors there are for these connected LLMs