‘Tech Connects’ Video, Transcript: Laura Baldwin, President of O’Reilly Media

Last week, we had Laura Baldwin, president of O’Reilly Media, as our latest ‘Tech Connects’ podcast guest. We devoted quite a bit of our discussion to how O’Reilly Media, which produces training for tech professionals, is integrating artificial intelligence (AI) into its tech stack and offerings.

Here’s the full video of our discussion for your viewing pleasure:

Here are some key takeaways from the discussion; hopefully you can use some of this advice as you plot your own AI strategy:

  • Focus on upskilling, not necessarily hiring AI engineers: Most companies don't need to hire expensive AI specialists. Instead, they can train their existing workforce to understand and work with pre-built AI tools offered by cloud vendors. 

  • Develop an AI strategy before training: Companies should understand their goals and how AI can be used to achieve them before investing in training. This will help identify the most relevant skills for employees to learn. 

  • Ethical considerations are crucial: Companies should use AI responsibly and transparently. This includes proper attribution of content used to train AI models and ensuring human oversight to prevent errors. 

  • Training should be ongoing and adaptable: The field of AI is constantly evolving, so companies need to provide ongoing training opportunities and encourage experimentation with new tools. 

And here are some excerpts where we dug into everything AI-related; check out the full episode for more!

Q: Where do the biggest gaps exist in terms of companies trying to find workers who are skilled in AI technology? How are they attempting to meet this demand? Where does the demand live?

It's an interesting question because I think about it a little differently. With companies, you hear all this: “Oh, we need to hire AI engineers. We need to hire AI engineers.” But that's not really what they need when you take a step back and think about the 6 million companies that are small- to medium-sized businesses with fewer than 500 employees. The AI engineers working at OpenAI, and on Gemini at Google, those are the engineers who must understand the depths of Large Language Models (LLMs), who have to understand the deep complexities around AI. But the rest of the companies out there don't need that skill set. When you think about it, most of these cloud vendors are doing partnerships with these AI companies, right? To bring AI offerings into the cloud, you need people who don't necessarily know how to train a model but who do know how to work with AI; they must be able to integrate it with their business.

So, there are plenty of developers out there. Yes, they're still in short supply, but they already have the basic skills they need to work with AI, as opposed to on AI. Most companies, even in our own customer base, we hear them say, “Oh, we need to hire AI engineers.” Well, what are you trying to do with them? A lot of those AI tools are available in the cloud, or there are AI chatbots that work through APIs that you can integrate with your own work. So, I think people need to take that step back and really understand what the capabilities out there are, and then think about what they're trying to do as an organization, and then take the step of: What do we need?
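To make Baldwin's distinction concrete, here's a minimal sketch of “working with AI” rather than “on AI”: calling a hosted model through a vendor's API instead of training one. This assumes the OpenAI Python SDK (v1 or later) and an OPENAI_API_KEY environment variable; the model name and the summarize_ticket task are illustrative placeholders, not anything discussed on the podcast.

```python
# "Working with AI" rather than "on AI": call a hosted model through
# its API instead of training one. Assumes the OpenAI Python SDK (v1+)
# and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_ticket(ticket_text: str) -> str:
    """Summarize a support ticket with a hosted LLM -- a typical
    'integrate AI with the business' task that needs no model training."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model your vendor offers
        messages=[
            {"role": "system", "content": "Summarize support tickets in two sentences."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize_ticket("Customer reports login fails after the 2.3 update..."))
```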

Q: What's the best way to upskill for AI? As in, what’s the best way to get everyone on this road, training them and making them better at prompt engineering or whatever they need to do?

Before you can hit that step as an organization, you have to have a step in front of that, which is: “What are the tenets of AI that you believe in as an organization?”

I’m going to read one that we wrote for our team: “We believe in the transformative nature of AI to augment jobs, solve large-scale problems, and create economic opportunity.” Why do I think that's important? Telling your staff they need to get trained up in AI, without an understanding of why or how they're going to apply it, is going to create fear and tension: “Oh my gosh, I'm training my replacement.” There's so much hype out there about AI replacing jobs.

If your organization sets that tenet up front, then your people are more likely to be excited about learning it and trying to understand it. We've told our team that we want to apply AI ethically and we want their help, and one of the best ways to learn is to play with it, so we opened up Gemini to everybody; we're a Google shop. But we created a set of policies around how people could use it and how they needed to make sure they didn't give away intellectual property that belongs to our authors. So, if you take those steps up front, the training piece is easy if you have a good learning system. I really believe you've got to have the right system with the right content, but first, before you can do that, you as a company have to set those AI tenets: AI is going to be a tool, and with that tool we want to build X.

If people see that outcome and they know what they're trying to do, it makes it so much easier for them to go into any learning service (there are many good ones out there) and learn what they need to learn. So, I think companies are saying, “Okay, we need AI training.” And it's like, “Okay, great. What are you trying to do?” “Well, we don't know yet. That's why we must get AI training.” No: AI is a tool that will enable a strategy. You've got to think differently, and I think that's where the hype is really hurting businesses, because they're not taking that step of: “What do you believe as an organization? What are you going to use AI to do? What have you always wanted to do that maybe you couldn't, that AI can help you with now?”

The team we have right now working on AI, they're not AI engineers. They're engineers who have been with us for a while; they used our own platform to train themselves, working, of course, with our CTO. They use services like Medium and other places like that. But they've done it on their own because we know what we're trying to build. So, there's an outcome they're going for, and they're trying to see how AI, as the tool, will help with that. That's the mistake I think most companies are making: they're not taking that critical step up front.

Q: So, you’re aiming before you fire, as opposed to a lot of companies that have adopted a ‘fire, ready, aim’ mentality. Ethical AI is one of those buzz terms that also gets thrown around. A lot of companies are like, “Oh, we need ethical AI,” which is great; obviously, AI should be ethical. But how do you proceed with ethical AI as a principle?

The basis of ethical AI, of course, is making sure that it's right. I still think there needs to be a human in the loop around AI; the hallucinations are sometimes fairly bad. So, I think that's part of it, but part of it is how you're going to use it, and that's the part about believing it can augment your job instead of taking your job away. We have thousands of content creators: the best minds in the technical world produce our content. For us, ethical AI is making sure that the models we're putting in place have a way to pay [creators] for their work, which is not what is happening right now with some of the larger models; that's what all the lawsuits are about. We're going live with our first product in a couple of weeks where we have figured out a way to pay the original content creators for the portions of their content that are used to generate an answer to a technical question, and we're proud of that. That's ethical AI; that's doing the extra work to make sure the people who have really been the basis of our business are properly remunerated for their quality content, right?
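Baldwin doesn't describe how O'Reilly's product actually tracks which creators contributed to a generated answer, but the general pattern is easy to sketch. Below is a purely illustrative example of pro-rating a per-answer fee across the creators whose retrieved passages grounded the answer; every name and number here is hypothetical, not O'Reilly's implementation.

```python
# Purely illustrative: one way a retrieval-augmented system could
# pro-rate a per-answer fee across the creators whose content was
# used to ground the answer. Not O'Reilly's actual implementation.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Passage:
    author: str  # the original content creator
    text: str    # the excerpt retrieved to ground the answer


def attribute_royalties(used_passages: list[Passage], answer_fee: float) -> dict[str, float]:
    """Split a fixed per-answer fee across creators, pro-rated by how
    much of each creator's text was used to generate the answer."""
    chars_by_author: dict[str, int] = defaultdict(int)
    for passage in used_passages:
        chars_by_author[passage.author] += len(passage.text)
    total = sum(chars_by_author.values())
    if total == 0:
        return {}
    return {author: answer_fee * chars / total
            for author, chars in chars_by_author.items()}


if __name__ == "__main__":
    passages = [
        Passage("author_a", "A retrieved excerpt on container networking..."),
        Passage("author_b", "A longer retrieved excerpt on service meshes, with more detail..."),
    ]
    print(attribute_royalties(passages, answer_fee=0.10))  # e.g., a 10-cent pool per answer
```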