Do we need new antidotes to protect against poisoning of the Generative AI supply chain?
Generative AI solutions and tools are being developed at a breakneck pace. Builders everywhere are doing whatever it takes to ship their products, with security left as merely an afterthought. There are myriad concerns on the surface of these solutions, including bias, privacy of information, and harmful output (e.g., hate speech).

Looking deeper, issues can manifest within these solutions that are imperceptible. Imagine querying one of these solutions and receiving reasonable output in return. Unless you are a subject matter expert, the validity of the contents is likely to go over your head, and you're prone to instinctively trust the output. In some domains, this is harmless: if the code emitted by one of these solutions is broken, you simply spend some time sharpening its rough edges. In biotechnology, however, a misdiagnosis from a generative AI solution might lead to a prescribed medication or procedure that could kill the patient. Generative AI is being built on a shaky foundation that could collapse at any moment. Is anyone paying attention?