TechTalk Daily
By: Daniel W. Rasmus for Serious Insights
Much of the discussion about generative AI and the future of work focuses on the potential for AI to displace workers. Vague references often suggest that AI will create new jobs, just as the automation of the past did. As with previous mechanical and digital automation, there are populations of experts who already understand the human labor required to make automation efficient and sustainable. Computers without programmers could do nothing. Grain combines without mechanics to clean and maintain their mechanisms would soon stall in the vast wheat fields they were intended to harvest.
Generative AI certainly has its inventors, those who know the secrets instilled in its algorithms, as much as the algorithms permit. They know, at least more than the users of generative AI do, what mixes of data were used for training.
Many believe that generative AI, once trained, is akin to a mythical beast that rises from an abyss. In this case, that abyss is the Internet, both light and dark. Generative AI follows the proclivities of all those who populated those vast repositories with facts, opinions, grievances, poetry, and many other outputs of humanity, written and visual.
To move beyond experiment, to prompt adoption and use by individuals and businesses, generative AI must be trusted. An AI that randomly curses, reflects discriminatory positions, or devolves into nonsense raises doubts about the efficacy of AI as a legitimate partner in business or in life.
So, the developers of large language models (LLMs) institute various remediations to aberrant AI behavior, reining in its most outlandish assertions. Rather than reflect all of the Internet, LLM makers hone output to avoid culturally sensitive areas, adopt a shareable context, and attempt to curtail the most unconventional outbursts. They do this response management with manual overrides that sit between the LLM and the people or systems making queries.
The key word here is manual. While there may be some automation involved in writing what have become known as guardrails, the choices about what needs to be guardrailed are purely human. No generative AI polices its own aberrant or offensive behavior.
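To make that idea concrete, here is a minimal sketch of what such a manual override layer might look like. Everything in it is illustrative: the rule list, the function name, and the refusal message are assumptions for the sake of the example, not any vendor's actual guardrail API.

```python
import re

# Hypothetical, human-authored guardrail rules. In a real deployment,
# each rule, its rationale, and its context would be documented as part
# of the organization's knowledge management record.
BLOCKED_PATTERNS = [
    r"\bdiscriminatory_term\b",   # placeholder for a curated term list
    r"\bprofanity_term\b",        # placeholder for a profanity list
]

REFUSAL_MESSAGE = "I'm unable to help with that request."

def apply_guardrails(prompt: str, llm_response: str) -> str:
    """Sits between the LLM and the people or systems making queries:
    screens both the incoming prompt and the model's output against
    manually chosen rules before anything is returned."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            return REFUSAL_MESSAGE    # block a disallowed request
        if re.search(pattern, llm_response, re.IGNORECASE):
            return REFUSAL_MESSAGE    # suppress disallowed output
    return llm_response               # pass clean responses through

# Example: a clean exchange passes through unchanged.
print(apply_guardrails("What is the capital of France?", "Paris."))
```

The point of the sketch is not the pattern matching but the provenance of the rules: someone chose them, and someone must track what was chosen, in what context, and why.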
The manual nature of guardrails requires knowledge management. Which guardrails are in place, along with their content and context, may need to be known, both to offer comfort to buyers and to act as a baseline as expectations, politics, or other contexts change.
Guardrails are not the only area where knowledge management needs to be applied to generative AI management.
The following list outlines the most important areas where organizations need to apply knowledge management principles to generative AI development and deployment.
If you're interested in the remaining 4 areas where organizations need to apply knowledge management principles to generative AI development and deployment, check out the rest of the article on SeriousInsight.com: 7 Reasons AI Needs Knowledge Management.
About the author:
Daniel W. Rasmus, the author of Listening to the Future, is a strategist and industry analyst who has helped clients put their future in context. Rasmus uses scenarios to analyze trends in society, technology, economics, the environment, and politics in order to discover implications used to develop and refine products, services, and experiences. He leverages this work and methodology for content development, workshops, and for professional development.
Interested in AI? Check here to see what TechTalk AI Impact events are happening in your area.