AI Expert Noelle Russell Launches New AI Masterclass
- info@onlinesafely.info
- May 14, 2024
- 8 min read

Noelle Russell, AI expert, talks with Tara Williams, founder of Onlinesafely.info, on May 10, 2024.
Does it seem like everyone is talking about AI? After learning about the U.S. State Department-commissioned report about the threats that AI poses and reading online articles about racist and biased AI products, I feel like this technology is primed to wipe out democracy and humanity any day.
Amid these doom and gloom scenarios is Noelle Russell, an AI lover and technologist who believes artificial intelligence can help people and businesses.
This week, Russell launches her AI Leadership Masterclass, a five-week AI leadership certification program with five online modules and 25-plus videos focused on AI. The course is offered through the AI Leadership Institute, an organization Russell established in 2015 with grants to educate boards about conversational AI agents from Microsoft and Amazon (it is currently a for-profit business).
The first week of the program focuses on AI foundations and the evolution of AI. The subsequent weeks concentrate on leveraging AI for business; building AI ethically and responsibly; and managing and deploying AI projects at scale.
The last week of the masterclass session Russell conducts three live learning labs where participants “build out” use cases or scenarios using solution playbooks that Microsoft and Amazon donated to the Institute.
There are no prerequisites like specific software programming languages or solutions architecture experience that participants must have to take the course. “I have people in my community that are executive farmers. In other words, they run dairy farmer conglomerates. I also have people from the adhesive and sealant community. I also have people from all over. I have geneticists in the community. So, these are all different people who wanna learn AI and then apply it to their specific line of work. They have no classical training in this space whatsoever,” Russell shared. (Full disclosure: this writer belongs to the AI Institute’s free I ♥️AI Community).
Russell infused the information she developed at the Institute to help build an AI curriculum at Miami Dade College as a part of the MDC AI BILT Team about 2 years ago.
The mother of 6 and science fiction fan entered AI software development when Amazon founder Jeff Bezos recruited her to develop Amazon’s Alexa audio device in 2014.
“I always say the story is that Jeff Bezos sent an email — and I always joke — he sent an email and it was in my inbox. I'm like, he wrote me an email. Then I always say like, well, he didn't write it to me, per se. It's not like he was like, 'Hey, Noelle,' but I felt like he was speaking to me,” Russell shares.
As Employee No. 10 with the Alexa team, Russell built over 100 audio applications during her first year at Amazon, including Daily Affirmations and One Minute Mindfulness, as a solutions architect.
Since then, Russell has led teams that developed and launched AI applications and AI programs at Accenture, IBM, Microsoft and NPR. She serves as a senior research fellow with AI2030. She has spoken about inclusive, ethical and safe AI and using AI in business applications like customer service and property investment analysis at technology, cybersecurity, broadcast news and other industry events around the United States (she recently returned from a conference in Auckland, New Zealand).
She also started The Lamplighter Effect, a podcast focused on leadership principles.
Tips for Women of Color and Others Who Want to Enter the AI Field
As a Latina woman who has worked in AI for about 10 years, what advice would Russell give to folks from marginalized communities interested in entering AI as a software developer, solutions architect, business and/or marketer?
“I think the biggest thing is finding a community. I mean, it’s the reason I built the I ♥️AI community is I realized that many times we have this ambition, and we feel kind of alone in that ambition. We think we're the only one that wants to do kind of crazy things, or only one who wants to go down a certain path. And oftentimes — especially if you're underrepresented, a woman, a woman of color, even men, I mean, now sometimes, I run into just divergent thinkers. And those divergent thinkers might not be people of color. They just might think differently, and they feel very alienated in these rooms. They feel very alienated on the teams that they're on. And so I always say, like, those of us who are neurodiverse, it's just as hard for those of us who like, that you can visibly see that we're different.
But my advice is always pretty much the same. Like, you're not alone. Number one, the best thing you can do is start to be candid and transparent about your experiences and find your tribe. Like, find your people. That's why I created the I ♥️AI group so that I have a place where I could be like, oh my gosh, this thing happens. And I'd have a group of people who understood and, you know, cheered me on, cheer each other on. We share resources. It becomes not me just trying to figure it out by myself, it becomes me, plus now we have like 350 people in the community. So it's super fun. I love it. And every time I'm in a city now there’s someone from the community in that city, which is super awesome.”
Russell also said that it’s important for individuals to have resilience, a mission and purpose-driven mindset and a long-term vision of the outcome they want. They are going to run into a lot of resistance in the short term because people won't understand their vision.
How Should People Design AI Systems that Avoid Racist and Biased Algorithms?
“I think it's actually very much in alignment with the same things I'm telling organizations now about cybersecurity. There are best practices right now in building these systems, AI systems specifically, but really technology systems in general. These practices are not new. They're well-architected patterns that we know work. In cyber, for example, there's just very baseline things that you can do that are not hard, but for some reason people don't do them. I don't know if they don't have time or they don't prioritize it, but it then makes them very susceptible to cyber attacks. Very easy, simple cyber attacks like ransomware. Like, we can completely eliminate ransomware. If you just do a couple of things like encrypt all of your information and duplicate it.
It's not hard to duplicate every piece of information. So if someone says they took it from you, they don't have your only copy. Like, that should be the first thing you do. But most companies don't do that. And I don't know why sometimes I hear them say they're worried about cost of duplication. I'm like, yeah, but now you're gonna pay a ransom. Like, let's think clearly about this. So I think there are patterns of successful mitigation strategies that are already well-known and well-documented. We are now at the point where they're even so well-documented that the government has documented them. And that's how you know, things are pretty mature if the government has had a chance to figure out, and it takes them years to do. So, figure out how to document this. The NIST framework, for example, for risk mitigation strategies, they just came out with an entire framework. It took them 10 years to write, but it is AI-focused.”
Russell also says people should use a secure cloud infrastructure architecture in which to build an AI model. They also need to ensure their model is fair. To do this, she recommends using metrics or a benchmark that provides a holistic evaluation of language models, like Stanford University’s HELM system.
“When you choose your model, you should know that there are systems like HELM, HELM Lite that you can use to actually evaluate that model before you even buy it before you use it before you invest in it, you can figure out what is its racism, sexism, self-harm, toxicity metrics of performance so that you don't walk into something that's already toxic before you even begun. And there are models that are like this,” Russell says.
“The last two are a little bit more fun,” Russell shares. “You need to have a team of people that know how to talk to a machine and how to tell a machine how to avoid racist, sexist, self-harm-type conversations. And that means they need to be good at a term now we use — prompt engineering. I don't think it's a role per se, it's more of a bullet point on a role you might have.
But you should know that there are specific ways that you talk to a model. One of them is called a system prompt that you create a message that you tell a robot, Here's how I want you to behave. And that behavioral setting is not, it is not just like, Hey, don't tell lies and be nice when you answer. Like it needs to be pretty robust. It needs to handle making sure people don't try and ask you to have this model say things it's not supposed to say or divulge secret information, or let people get trade secrets where they're not available. You have to tell the model what to say and what not to say. And then on top of all that, you need users to test it out before you go to production. I’d say probably 80 percent of projects that failed in production failed because they didn't do enough user testing. And sometimes they did no user experience work at all before they deployed, which is shocking to me. But many data scientists have no relationships with UX at all. And that is something I hope to change.”
How Does the Award-Winning AI Expert, Speaker, Mother, Spouse and Caretaker Stay So Positive and Energized?
“One, I do have a pretty rigorous self-care routine. I do hot yoga. I do very hard things. I have an ice bath, we have an infrared sauna. So I'm always putting myself in very hard situations temporarily, intentionally, what do they call it? They call it voluntary exposure effect or something like that, which means you voluntarily expose yourself to really hard things. So when hard things happen, it actually doesn't seem that hard because you're like, Oh my gosh, I was just in 40-degree water this morning. Nothing bothers me. Bikram yoga is like 90 minutes in a 104-degree room doing really hard yoga postures. It’s not fun. It's horrible. You think you're gonna die, but then you're done. You're like, well, I could do anything now.
But I will say one of my mantras is to be the most positive person in the room. And I think it's an important mantra because saying Be in Front of It means not that I am, not that I feel, but that I'm gonna choose. I'm gonna make a selection to just be the most positive person in every room I go in, including my apps. And that doesn't mean all the time. It doesn't mean 24/7, and it doesn't even mean that I'm fake about it. It just means I'm gonna choose not to see whatever's frustrating me or aggravating me, or whatever obstacles I have to face. I'm gonna be extremely present. And when you're present, you can be grateful for so much. And in that gratitude, you inherently become more positive. I mean, there's science behind this, right? Lots of research around the happiness effect that comes from just being grateful.”
Comments