Future tech professionals are leaving school knowing how to generate quick outputs with AI, but they often lack other important skills such as prompt engineering, bias detection, and responsible AI use.
Using AI today requires AI literacy, which education nonprofit Digital Promise defines as “the knowledge and skills that enable humans to critically understand, evaluate, and use AI systems and tools to safely and ethically participate in an increasingly digital world.”
Richie Cotton, senior data evangelist at online learning platform DataCamp, considers AI literacy to be like a driver’s license for AI.
“Most people don't need to understand the details of how an engine works, but they do need to know how to drive a car safely on the road,” Cotton says. A worker doesn't need deep expertise, he adds, just enough understanding of AI to carry on an intelligent conversation about it.
Close to 9 in 10 leaders rated basic data literacy skills as important or very important in the 2026 State of Data & AI Literacy Report, released on Feb. 26 by DataCamp in partnership with YouGov, a market-research and data analytics firm. The report incorporated responses from C-level executives, middle management and senior management in industries such as finance, manufacturing, healthcare and technology. In the study, 57% of business leaders said AI literacy had become the most important skill in the workplace in the last year, yet less than half reported that their organizations offer basic data or AI literacy training.
Even when students graduate with AI literacy skills, their learning is not complete once they enter the tech workforce.
“Learning is very much a continuous skill,” Cotton says.
Jonathan Cornelissen, cofounder and CEO at DataCamp, said organizations should offer education that incorporates data and AI into regular workflows.
“Closing this gap at scale requires more than incremental spending on traditional training,” Cornelissen said in a news release. “It demands a shift from passive, one-off courses to embedded, role-relevant learning that turns data and AI from tools into daily habits.”
Organizations such as DataCamp offer courses on AI fundamentals and how to write good prompts for AI chatbots.
Mastering Prompt Engineering
AI literacy also entails learning how to prompt a chatbot to produce accurate results. Writing precise prompts for AI tools will be an essential skill going forward, according to Marlen Rattiner of Turnitin.
“The better you are at writing, the better your output is going to be at the end of the day,” Rattiner says.
Becoming competent at prompting takes ongoing practice rather than a single course, Cotton suggests.
“It's just something you can practice,” he says.
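As a rough illustration of what that practice builds toward (the template and function name below are hypothetical, not drawn from any DataCamp course), a well-structured prompt typically states a role, context, task, and output constraints rather than a bare request:

```python
# Sketch: structuring a prompt with an explicit role, context, task,
# and output constraints. The helper name and fields are illustrative,
# not a standard API.
def build_prompt(role: str, context: str, task: str, constraints: str) -> str:
    """Assemble the pieces into one clearly delimited prompt string."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Constraints: {constraints}"
    )

# A vague prompt leaves the model guessing:
vague = "Summarize this report."

# A precise prompt pins down audience, scope, and format:
precise = build_prompt(
    role="a financial analyst writing for executives",
    context="a 40-page quarterly sales report",
    task="summarize the three most important trends",
    constraints="no more than 150 words, plain language, no jargon",
)
print(precise)
```

The point is less the helper function than the habit: each added field removes a way the output can drift from what was wanted.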
Prioritizing Critical Thinking and Decision-Making
AI literacy goes beyond the “mechanics” of AI, such as building prompts, to include teaching how to think critically, says Marlen Rattiner, vice president of product management for Turnitin, a developer of academic integrity and learning solutions. He warns against students and tech professionals outsourcing their critical thinking skills to AI.
“It will actually make your skills probably erode over time if you're not going into it critically,” Rattiner says.
Critical thinking includes challenging the AI-driven recommendations that team members present. Asking “what if” questions and simulating consequences that may arise from AI also strengthens critical thinking, the MIT Sloan School of Management advised in a blog post.
The DataCamp survey found that a lack of AI literacy skills leads to poor decision-making. Enterprise leaders may get results from data or AI, but may not know how to turn the data into action, according to Cotton.
“I think often that last step of going from I've got some results to I need to take an action, that's a big gap at the moment,” Cotton says.
Gaining Literacy in Responsible AI and Guardrails
Cotton notes that responsible AI must be taught early on to tech professionals, and that includes learning about the dangers of misusing AI. Organizations should offer AI literacy training during onboarding, Cotton suggests.
Article 5 of the EU Artificial Intelligence Act outlines some prohibited practices of AI, such as deploying AI to exploit human vulnerabilities and introducing services that predict the risk of committing criminal offenses based on profiling a person’s traits and characteristics rather than using human assessment based on “objective and verifiable facts directly linked to a criminal activity.”
Another aspect of AI literacy is knowing how to deal with AI when it produces incorrect information, Cotton says. He noted the failure of a facial recognition system checking his identity against his passport at the airport.
“There was no process there for dealing with AI going wrong,” Cotton recalls. “This is a systemic failure, and management had not thought through what's going on there. … if you’re a manager, you need to understand how you can build systems that deal with AI not being correct.”
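The kind of system Cotton describes can be sketched simply (a minimal illustration, with a made-up threshold and function name, not the airport's actual design): the key is that a low-confidence or failed AI result routes to a defined human step instead of a dead end.

```python
# Sketch: a fallback path for when an AI check fails or is uncertain.
# The threshold and names are illustrative assumptions.
FALLBACK_THRESHOLD = 0.90  # below this confidence, a human takes over

def verify_identity(ai_match_score: float) -> str:
    """Route an identity check: accept on high confidence,
    otherwise fall back to manual review rather than simply failing."""
    if ai_match_score >= FALLBACK_THRESHOLD:
        return "accepted"
    # The system must define a next step for when AI is wrong:
    return "manual_review"

assert verify_identity(0.97) == "accepted"
assert verify_identity(0.40) == "manual_review"
```

The design choice here is the second branch: managers building AI-backed systems decide in advance what happens when the model is not correct.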
Literacy on guardrails is important when organizations are developing chatbots. Safeguards could include preventing chatbots from cursing or writing love letters to a customer, Cotton says.
“This is an essential step in the design process,” Cotton says. He explains that product managers must consider which guardrails to incorporate for each AI use case.
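One simple form such a guardrail can take (a minimal sketch; the pattern list and function names are illustrative, and production systems use far richer checks) is screening a draft response against disallowed patterns before it reaches the customer:

```python
import re

# Sketch: a simple output guardrail that blocks draft responses
# matching disallowed patterns. The patterns are illustrative stand-ins.
BLOCKED_PATTERNS = [
    r"\blove letter\b",
    r"\bdamn\b",  # stand-in for a fuller profanity list
]

def passes_guardrails(response: str) -> bool:
    """Return False if the draft response trips any blocked pattern."""
    return not any(
        re.search(pattern, response, re.IGNORECASE)
        for pattern in BLOCKED_PATTERNS
    )

def reply(draft: str) -> str:
    """Send the draft only if it passes; otherwise hand off safely."""
    if passes_guardrails(draft):
        return draft
    return "Sorry, I can't help with that. Let me connect you with an agent."
```

Which patterns belong on the list is exactly the per-use-case decision Cotton says product managers must make.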
Cotton sees guardrails as particularly relevant to content-creation AI, but they also matter for physical AI, such as self-driving cars. That includes knowing when a self-driving car still needs a human driver as backup.
Spotting bias and false information in AI tools is also a key part of AI literacy. Knowing what’s real and what’s not is critical.
“You constantly need to be able to assess if what you're seeing is even real and if what they're writing is reputable at all,” Rattiner says. “This is an actual skill every person is going to require moving forward.”
Just a few hours of learning about the most common issues around bias should be sufficient, Cotton says.
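One of those common starting points (a minimal sketch with made-up data, not a complete fairness audit) is checking whether a system produces positive outcomes at similar rates across groups:

```python
# Sketch: demographic parity difference, one of the simplest bias
# checks. Outcome lists are 1 (approved) / 0 (denied); data is invented
# purely for illustration.
def approval_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def parity_gap(group_a: list[int], group_b: list[int]) -> float:
    """Absolute difference in positive-outcome rates between two groups;
    values near 0 suggest similar treatment on this one narrow metric."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

group_a = [1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 1]  # 50% approved
print(parity_gap(group_a, group_b))  # 0.25
```

A gap like this doesn't prove bias on its own, but spotting it is the kind of basic check a few hours of training can equip someone to run.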
AI literacy isn't a destination — it's an ongoing practice, and the professionals who treat it that way will be better equipped to navigate whatever comes next.