Twenty-five years ago, Jay Bavisi founded EC-Council in the aftermath of 9/11 with a simple premise: if attackers understand systems deeply, defenders need to understand them just as well. That idea led to Certified Ethical Hacker (CEH), which went on to become one of the most widely recognized credentials in cybersecurity.
Bavisi thinks we're at a similar inflection point again, this time with AI.
The technology is moving fast. The workforce isn't. And just like the early days of software development, most of the attention is on what AI can do, not on how to deploy it safely, responsibly, or at scale.
"We're back in that era where building something feels cool," Bavisi told me. "In the early days of web development, security and governance were afterthoughts. We're doing the same thing again with AI: functionality first, use cases first, and only later asking what the risks are."
That's the gap EC-Council is trying to address with the biggest expansion of its portfolio in 25 years: four new AI certifications and a revamped Certified CISO program.
The Skills Gap Isn't Hypothetical
The data behind this push isn't subtle. IDC estimates unmanaged AI risk could reach $5.5 trillion globally. Bain projects a 700,000-person AI and cybersecurity reskilling gap in the U.S. alone. The IMF and World Economic Forum have both landed on the same conclusion: access to technology isn't the constraint; people are.
I've spent the last couple of years talking with executives about AI, and the tone has shifted. Early on, nearly everyone insisted AI wasn't going to replace jobs. It became almost ritualistic. Understandable, sure, but not entirely honest.
Lately, the messaging has changed. Some roles will disappear. That's not controversial anymore. The more accurate framing has always been: AI probably won't take your job, but someone who knows how to use AI better than you might. That's the real risk, and the real opportunity.
What EC-Council Is Actually Launching
The new certifications are built around a framework EC-Council calls ADG: Adopt, Defend, Govern. It's meant to give organizations a way to think about AI intentionally, rather than defaulting to "just buy a subscription and see what happens."
"It's not just about picking Claude or Gemini or GPT," Bavisi said. "Your data, your customer information, your business processes all get pulled in. You need guardrails."
The four certifications are role-specific:
- AI Essentials (AIE) is baseline AI fluency: practical, not theoretical.
- Certified AI Program Manager (C|AIPM) focuses on implementing AI programs with accountability and risk management.
- Certified Responsible AI Governance & Ethics Professional (C|RAGE) targets governance gaps, aligning with frameworks like NIST AI RMF and ISO/IEC 42001.
- Certified Offensive AI Security Professional (C|OASP) teaches practitioners how to attack LLM systems so they understand how to defend them.
That last one feels especially on-brand. It's essentially the CEH mindset applied to AI: you can't defend what you don't understand.
Why This Isn't Academic
Bavisi shared a recent example that puts the urgency into perspective. EC-Council took part in a controlled test with a top-ten global insurance company. They compared traditional human-led pen testing against the AI approach.
Across three rounds, the humans found five vulnerabilities in total. The AI found 37.
That's not an indictment of human skill. It's a reminder that AI doesn't get tired, doesn't forget, and doesn't operate within the same constraints. The job doesn't disappear, but the expectations around how it's done change dramatically.
The CISO Role Is Changing Too
Alongside the AI certifications, EC-Council updated its Certified CISO program to version 4. Security leaders are now responsible for systems that learn, adapt, and make decisions autonomously, but that's not what most CISOs trained for a decade ago.
The updated curriculum reflects that reality: less checklist security, more governance, risk ownership, and accountability in AI-driven environments.
Why This Matters
Certifications don't magically make someone an expert. I've collected enough of them over the years to know that. But they do matter. They open doors. They signal baseline competency. And right now, that signal carries more weight than usual.
"There are cloud engineers and GRC professionals everywhere asking the same question," Bavisi said. "How do you do governance and risk with AI? Until now, there haven't been real frameworks or real training programs."
AI isn't slowing down. The workforce has to catch up. EC-Council is betting that structured, role-based training, grounded in practical reality rather than hype, can help close that gap. Given what they did with CEH, it's a bet worth paying attention to.