How Artificial Intelligence Is Redefining Human Intelligence

Key takeaways

  1. AI will handle most day‑to‑day IT and security tasks, while humans remain responsible for priorities, impact, and ethics.
  2. While some routine tasks are being automated, new and more impactful roles are emerging for people who guide, question, and govern how AI is used.
  3. The long‑term winners will be humans who focus on judgment, context, creativity, and leadership—using AI as a powerful engine, not competing with it.

AI Runs the Stack. Humans Run the Stakes. We Both Win.

Will AI take my job? That’s the billion (or trillion) dollar question everyone’s been asking, and plenty of people are rushing to predict the answer.

If you lead a NetOps, ITOps, or SecOps team right now, you are probably watching AI automate the work your people have spent careers mastering, and asking what that means for your team, your budget, and your own job.

Here is the uncomfortable truth, and the more important truth behind it.

The uncomfortable truth: AI is already outperforming humans on structured tasks including ticket triage, anomaly detection, log correlation, vulnerability scanning, and first-pass incident response. The systems are faster, more consistent, and improving every quarter.

The more important truth: the roles being displaced are narrower than the roles being created. The professionals who thrive will not be the ones who resist AI, nor simply the ones who learn to manage and orchestrate it. They will be the ones who understand where AI structurally fails and build their career around exactly that.

The Intelligence Shift: The IQ AI Now Owns

For most of the 20th century, IQ was the gold standard of human cognitive capabilities. It measured pattern recognition, verbal fluency, logical sequencing, and numerical reasoning. Schools optimized for it. Corporations hired around it. Entire life trajectories bent toward it. Then AI arrived and quietly made it irrelevant.

By 2025, frontier AI models were scoring in the 99th percentile on IQ-style benchmarks. GPT-4 outperformed most humans on the SAT, the Bar Exam, and the USMLE. OpenAI's o3 scored 87.5% on ARC-AGI, a test designed specifically to challenge machines on novel human reasoning. The skills we built IT certification programs around, including recall, pattern matching, and structured problem-solving, are now commodities available to anyone with an internet connection.

Operationally, here is what that shift looks like in real time:

Agentic AI, meaning systems that do not wait for prompts but instead observe, reason, and act autonomously, is making the 2023 model of AI as copilot obsolete. In 2026, AI does not wait for your question. It is already acting on it.
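The observe-reason-act loop described above can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation; the `Event`, `classify`, and `agent_step` names are assumptions invented for the example:

```python
# Hypothetical sketch of an agentic observe -> reason -> act cycle.
# All names here are illustrative, not a real product API.

from dataclasses import dataclass

@dataclass
class Event:
    source: str      # e.g. "network", "identity", "database"
    severity: int    # 0 (informational) .. 3 (critical)
    summary: str

def classify(event: Event) -> str:
    """Reason: decide what the observed event means."""
    return "remediate" if event.severity >= 2 else "log"

def agent_step(event: Event) -> str:
    """One full cycle: the agent acts without waiting for a human prompt."""
    action = classify(event)                                  # reason
    if action == "remediate":
        return f"auto-remediating: {event.summary}"           # act
    return f"logged: {event.summary}"

print(agent_step(Event("identity", 3, "credential stuffing burst")))
```

The point of the sketch is the control flow: nothing in the loop pauses for a question. Whatever policy `classify` encodes is what the agent will do, which is exactly why humans must own that policy.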

The UniOps Revolution: Silos Are Gone

For decades, IT operations ran in parallel lanes. Networking teams looked at packets. Security teams looked at threats. Cloud teams looked at compute. Each had its own dashboards, language, and definition of normal.

With a unified data fabric, AI has dissolved those silos into what practitioners are calling UniOps, or Unified Operations. An AI agent does not categorize telemetry as belonging to NetOps, ITOps, or SecOps. It sees a single stream of interconnected data and recognizes, in real time, that a latency spike in the network layer, a surge in failed authentication attempts in the identity provider, and unusual database query volume from a service account are three symptoms of the same event: a credential-based intrusion unfolding across your stack simultaneously.
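The cross-domain correlation described above can be approximated with a simple sketch: group alerts from different domains that share an entity and fall within a time window. Real platforms do this with streaming joins over unified telemetry at far larger scale; the alert records and the `correlate` function below are assumptions for illustration only:

```python
# Illustrative sketch of the "UniOps" view: alerts from three siloed
# domains collapse into one incident when they share an entity and a
# time window. Data and function names are invented for the example.

from collections import defaultdict

ALERTS = [
    {"domain": "NetOps", "entity": "svc-batch", "t": 100, "signal": "latency spike"},
    {"domain": "SecOps", "entity": "svc-batch", "t": 105, "signal": "failed auth surge"},
    {"domain": "ITOps",  "entity": "svc-batch", "t": 110, "signal": "unusual query volume"},
    {"domain": "NetOps", "entity": "edge-7",    "t": 400, "signal": "link flap"},
]

def correlate(alerts, window=60):
    """Bucket alerts by entity, then merge those within `window` seconds."""
    by_entity = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["t"]):
        by_entity[a["entity"]].append(a)
    incidents = []
    for entity, group in by_entity.items():
        current = [group[0]]
        for a in group[1:]:
            if a["t"] - current[-1]["t"] <= window:
                current.append(a)
            else:
                incidents.append((entity, current))
                current = [a]
        incidents.append((entity, current))
    return incidents

for entity, group in correlate(ALERTS):
    domains = {a["domain"] for a in group}
    if len(domains) >= 3:
        print(f"{entity}: one incident spanning {sorted(domains)}")
```

Here the three symptoms from the scenario above (latency spike, failed auth surge, unusual query volume) land in a single incident for `svc-batch`, while the unrelated link flap stays separate.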

The Hard Reality

No human team, however talented and well-coordinated, can correlate millions of data points across three siloed operations domains in real time. AI can. But here is what no AI can do: decide what that event means for your organization, your users, and your obligations to both.

The New Division of Labor: What AI Doesn’t Own

The most useful framework for understanding where humans fit comes from neuroscience, and it maps onto modern operations with precision.

AI Is Your Autonomic Nervous System

You do not consciously direct your white blood cells to attack an infection. Your autonomic system handles that continuously, invisibly, and at speeds your conscious mind cannot match. Agentic AI now does the same for your infrastructure, detecting, correlating, and remediating at machine speed before a ticket has been opened.

You Are the Prefrontal Cortex

You provide the executive function that governs the autonomic system. You do not compete with it for speed. You determine what it is allowed to do, why, and when it is wrong.

Consider the difference:

AI understands metrics. Humans understand meaning. In high-stakes operations, meaning determines whether a technically correct decision is actually the right one.

The 2026 Scorecard

Where do humans and AI genuinely stand across the dimensions that matter most in UniOps?

| Capability | AI Strengths | Human Strengths | 2026 Edge |
| --- | --- | --- | --- |
| Speed & Scale at 2 AM | Terabytes/sec, tireless | Limited, fatigues | AI |
| Pattern Recognition | Superhuman across all data | Solid with sparse data | AI |
| Correlating NetOps + SecOps + ITOps | Real-time unified telemetry | Siloed, delayed | AI |
| Context & Organizational Meaning | Metrics only | Understands stakes & politics | Human |
| Ambiguous Crisis Response | Struggles with contradictions | Ethical, adaptive judgment | Human |
| Insider Threat Nuance | Flags statistical anomalies | Reads full human context | Human |
| Emotional Intelligence | Simulated, unconvincing | Genuine, builds trust | Human |
| Security Architecture | Technically correct | Shaped by real breach experience | Human |

The pattern is clear: AI dominates whatever can be formalized and scaled. Humans are decisive wherever organizational context, ethical stakes, and lived experience are required.

The Five Human Capabilities That Matter Now

These are not soft skills. In an agentic environment, they are the executive function layer that determines whether autonomous systems serve your organization's real goals or optimize toward the wrong ones.

1. Curiosity: Ask the Questions AI Didn't

An AI agent answers the question it was asked to answer. It will not ask whether that was the right question, or wonder whether the alert pattern it is optimizing against is masking a deeper architectural vulnerability not yet visible in any metric.

Practical Skill

Interrogate AI outputs rather than simply consuming them. Ask: why did the agent reach this conclusion? What organizational context (engineering, support, renewals, documentation, etc.) did it not have access to? What question did it never think to ask? Unexamined trust in autonomous systems is one of the most consequential and underappreciated risks in modern IT operations.

2. Intuition: Pattern Recognition Built from Real Incidents

The senior security engineer who walks into a war room during a major incident and senses, before opening a single dashboard, that this is a coordinated supply chain attack rather than a routine failure is drawing on compressed lived experience that no model can replicate. That recognition emerges from years of high-stakes operational presence.

Practical Skill

Intuition is built deliberately through tabletop exercises, systematic post-incident analysis, and mentorship with experienced operational practitioners. Start with small projects and build trust in AI outputs as you expand the collaboration. The goal is not to replicate AI pattern recognition. It is to develop the human judgment that knows when to trust it and when to override it.

3. Ambiguous Problem-Solving: When the Playbook Dissolves

Real crises arrive with contradictory telemetry, simultaneous failure modes, political pressures, and human stakes that cannot be reduced to an optimization function. When a major fiber cut occurs simultaneously with an active ransomware intrusion, AI systems struggle precisely where humans are decisive: incomplete data, conflicting priorities, and decisions that carry real accountability.

Practical Skill

Red team your own playbooks by asking: what happens when two critical priorities directly conflict? Build escalation frameworks that explicitly define where AI authority ends and human judgment must begin, not during the crisis, but well before it.
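An escalation framework like the one described above can be written down as code rather than left implicit. The sketch below is a hypothetical policy, assuming invented action names and an `authorize` helper; the design point is that the boundary between AI authority and human judgment is explicit and decided before the crisis:

```python
# Hypothetical escalation policy: an explicit, pre-agreed boundary
# between what an AI agent may do autonomously and what requires a
# human decision. All action names are invented for illustration.

AI_MAY_ACT = {
    "restart_service",
    "quarantine_host",
    "block_ip",
}

HUMAN_REQUIRED = {
    "shut_down_production_db",
    "revoke_all_sessions",
    "notify_regulator",
}

def authorize(action: str, conflicting_priorities: bool) -> str:
    """Return who decides: the agent, or a human on the escalation path."""
    if conflicting_priorities or action in HUMAN_REQUIRED:
        return "escalate_to_human"
    if action in AI_MAY_ACT:
        return "ai_autonomous"
    return "escalate_to_human"   # default to human for anything unlisted

print(authorize("block_ip", conflicting_priorities=False))   # ai_autonomous
print(authorize("block_ip", conflicting_priorities=True))    # escalate_to_human
```

Note the two deliberate choices: any conflict between critical priorities forces escalation regardless of the action, and anything unlisted defaults to a human. Defaulting to autonomy is the failure mode to avoid.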

4. Emotional Intelligence: The Failure Mode Your Monitoring Stack Can't See

Run a post-mortem on your last three major incidents. Somewhere in that timeline, there is almost certainly a human coordination failure your observability platform never logged: the finding that did not get escalated, the decision made incorrectly under executive pressure, the two engineers who quietly started job-searching after an incident that was resolved on paper but felt like a failure to everyone in the room. That is EQ showing up as operational risk, and it is the one gap no monitoring tool will ever surface for you.

Practical Skill

After your next major incident, go beyond the technical timeline. Where did communication break down? Where did someone know something and not say it? That is your EQ gap map, and it is more actionable than any leadership certification.

5. Creativity From Experience: The Architecture Only You Could Build

There is a meaningful difference between a disaster recovery plan that looks correct in documentation and one built by someone who was present when a previous plan failed. AI can generate a technically sound architecture. It cannot generate yours, the one shaped by your specific failures, your specific users, and the moments that taught you what resilience actually means in your environment.

Practical Skill

Document your operational history and systematically share it with the AI systems you work with. The incidents you have lived through, the failures you have owned, and the recoveries you have led are not resume entries. They are a compounding asset that no AI trained on generalized data can replicate.

The Bottom Line for IT and Security (and UniOps) Leaders

Our society has measured human intelligence quantitatively for so long, whether by building elaborate spreadsheets or using logic to solve complex problems, that it will be hard for us to measure ourselves any other way. You'd be right to be skeptical of what I wrote above. After all, how do you measure curiosity? But we all know that we need to think differently in this new world of AI.

The rise of agentic AI does not signal the end of the Network Engineer, the SOC Analyst, or the IT Operations lead. It signals the end of a definition of those roles that was always too narrow, and it can turn a good engineer into a great one.

If your professional identity is built around configuring devices, closing tickets, or reviewing logs, you are competing with systems that are faster, cheaper, and tireless. That is a competition you will not win.

If your professional identity is built around business judgment and ensuring that fast autonomous systems serve the right goals, make defensible decisions, and remain accountable to the humans who depend on them, you are not competing with AI at all.

You are providing the one thing AI cannot replicate: the judgment that decides what technology can be used and trusted. The wisdom that determines what is worth defending. The conscience that understands what secure and resilient really mean when human livelihoods are on the line.

It would be disingenuous to say that I know how AI will change our world. No matter how many people are predicting the future of AI and the demise of this or that, I don’t think anyone really knows. However, humans have adapted through the industrial and digital revolutions, and we will adapt again. If we can begin to understand how humans have different strengths that AI can’t have, we can start to go down the path of coexistence and growth.
