Microsoft and GitHub

The Assimilation You Didn’t See Coming (Until You Did)

Pull up a chair and grab some caffeine: there’s big news brewing at the intersection of Microsoft, GitHub, AI, and the unsung heroes of cybersecurity (those of us in the trenches).

Let’s sum up recent events:

A CEO Steps Down… And GitHub Loses Its Foil

On August 11, 2025, GitHub CEO Thomas Dohmke dropped the mic: he’s resigning, effective at the end of this year, to follow his perennial calling back to startup life. If that name rings a bell, he’s the chap who led GitHub through its period of massive growth: from under 80 million users to over 150 million, presiding over a billion repositories and forks. Essentially, Tom knows what he’s doing.

GitHub Gets Folded Into CoreAI: Welcome to the Corporate Family

But wait, there’s more. There’s no new CEO waiting in the wings. Instead, GitHub’s leadership now reports into Microsoft’s CoreAI org, home to the company’s AI strategy and execution, under Julia Liuson, via Jay Parikh’s senior leadership team. Independence is out. GitHub just went from semi-autonomous to “all-in” Microsoft AI.

*pause to let that sink in*

Microsoft Copilot vs. GitHub Copilot

Copilot isn’t just a supporting character anymore. It’s now the lead and the star of the show. With over 20 million users, GitHub Copilot has morphed from autocomplete wizardry into a full-on conversational AI coder, with features like Copilot Chat and Voice on-call to round the edges off so that (on the surface at least) users can’t cut themselves.

  • Copilot Chat is GitHub’s “conversational AI tool” that lets developers query, debug, and generate code in natural language directly inside their IDE.
  • Copilot Voice on-call allows developers to control Copilot hands-free with voice commands, dictating code requests or questions aloud instead of typing.

I, for one, love living in the future, but Microsoft’s broader AI ambitions mean Copilot won’t just be a GitHub tool. It’s part of the entire Microsoft GitHub ecosystem, pushing AI-enriched development experiences across Azure, Visual Studio, and more, and, honestly, this is just how it is from now on.

The Developer Community Reacts… With a Sigh (or Maybe a Gasp)

For many, Microsoft’s assimilation of GitHub is a nail in the coffin of platform trust and further heralds the rise of enforced AI extraction. Some devs are rolling their eyes. Some are sharpening their pitchforks.

Surprise and uproar came when Dohmke declared in early August, “Use AI or quit coding” [Times of India], a statement met with shock, backlash, and existential dread from those who still value sweat-earned code and the crunch over auto-generated boilerplate.

Meanwhile, justified or not, across the cybersecurity grapevine (yes, Hacker News), the discussion is less about Copilot's prowess and more about “betrayal”: “GitHub is no longer independent,” some commenters sigh, likening this to past acquisitions like Xamarin that quietly faded away after getting cozy with Microsoft.

What This Means for Cybersecurity

It’s our world in which this storm is brewing, so let’s consider the potential consequences:

  1. AI-Generated Code May Be Vulnerable
    The research isn’t kind. Nearly 40% of Copilot-generated code can harbor vulnerabilities in high-risk scenarios [CCS]. Another study [Veracode] found that almost 30% of Python and 24% of JavaScript snippets from Copilot contained security weaknesses across 43 Common Weakness Enumeration (CWE) categories, some of them in the CWE Top 25.
  2. A Wider AI Attack Surface
    With Microsoft GitHub pushing AI deep into development workflows, we now face a potentially expanded attack surface and an increase in cybersecurity liability. Malicious actors might exploit AI-generated features, or the outputs they generate, to slip vulnerabilities under our radar.
  3. The Takeaway?
    AI tools speed us up, but we still need to scrutinize the code they give us with the same paranoia we’d use for Net‑NTLM sniffers.
  4. AI Boosts Productivity, When It Doesn’t Break Stuff
    On the brighter side, AI can cut dev time dramatically. In one study, developers were ~56% faster using Copilot. So yes, it’s powerful, but only as safe as our human oversight.
  5. Security Automation Needs to Catch Up
    AI-generated code must pass through iron-clad security pipelines. Augmented static analysis, perhaps AI-assisted itself, needs to become the norm, so “just trust AI” can never become the default policy.
  6. Microsoft Should Be Accountable
    As GitHub becomes more deeply woven into Microsoft’s AI strategy, transparency around how Copilot is secured, tested, and updated, covering at minimum the CWE Top 25, must become non-negotiable. It’s our job both to demand that transparency and to implement it.

Microsoft GitHub Is Evolving: Cautiously Observe and Adapt

Microsoft GitHub is no longer just the friendly, quasi-neutral code repository; it’s increasingly the flagship of Microsoft’s AI-first dev strategy. This evolution carries both promise and potential peril:

  • Promise: AI tools can sharpen developer productivity and usher in new efficiency models.
  • Peril: Unchecked AI-generated code may open doors to new vulnerabilities and new channels for supply-chain compromise.

As cybersecurity pros at SMEs, or anywhere with a stake in code safety, this means leaning into the AI tide with vigilance:

  • Audit Copilot outputs, even when they look neat.
  • Automate security gates in continuous integration (CI) from the very first build, so every commit and pull request is tested for vulnerabilities before it ever gets near production, especially if AI-generated code is involved.
  • Keep the user community informed about the risks of “Use AI or quit coding” rhetoric. It’s how we build resilience, not shock conformance.
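To make that “automate security gates” advice concrete, here’s a deliberately minimal sketch of a pre-merge check in Python. The pattern list, CWE mappings, and file-handling are illustrative assumptions on my part; a real pipeline would wire a proper static analyzer (Bandit, Semgrep, CodeQL, or similar) into CI rather than a hand-rolled deny-list like this:

```python
import re
import sys
from pathlib import Path

# Illustrative deny-list only: a production gate would delegate to a
# real static analyzer mapped to CWE categories.
RISKY_PATTERNS = {
    r"\beval\(": "use of eval() (CWE-95)",
    r"shell\s*=\s*True": "subprocess with shell=True (CWE-78)",
    r"(?i)(password|secret|api_key)\s*=\s*[\"'][^\"']+[\"']":
        "hardcoded credential (CWE-798)",
}

def scan_source(text: str) -> list[str]:
    """Return a list of findings for one file's source text."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, message in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"line {lineno}: {message}")
    return findings

def gate(paths: list[Path]) -> int:
    """CI entry point: a non-zero exit code fails the build."""
    failed = False
    for path in paths:
        for finding in scan_source(path.read_text()):
            print(f"{path}: {finding}")
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(gate([Path(p) for p in sys.argv[1:]]))
```

Run against every changed file in a pull request, a gate like this fails the build before AI-generated boilerplate ever reaches production; the value is in the placement (every commit, every PR), not the sophistication of the checks.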

A Backbone of Trust

At Trustle, we’re the watchdog at the gate, ensuring that what Copilot (or any automation) proposes can’t quietly sneak vulnerabilities into our cloud. Trustle gives organizations the guardrails and oversight layer that GitHub itself doesn’t.

Vigilance (keeping AI code in check):

  • Automated entitlement reviews: So if Copilot (or a tired human) drops insecure identity and access management (IAM) permissions into Terraform, Trustle flags and revokes them before they harden into risk.
  • Just-in-time access (JIT): Makes sure any AI-assisted code deployments don’t leave behind permanent standing privileges. The bot gets what it needs only when it needs it.
  • Zero Standing Privileges (ZSP): Every account, whether human or automation, is constantly pared back. AI security is paramount, and if AI-generated code tries to rely on “always-on” over-privilege, it won’t fly.
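
To show the kind of “always-on” over-privilege these reviews catch, here’s a hedged sketch in Python. The function name and the policy shape are my own illustrative assumptions (an AWS-style IAM policy document), not Trustle’s actual implementation; real entitlement review evaluates effective permissions, not just literal wildcards:

```python
import json

def flag_overprivilege(policy_json: str) -> list[str]:
    """Flag wildcard grants in an AWS-style IAM policy document.

    Illustrative only: checks for literal '*' actions and resources,
    the classic shape of 'always-on' over-privilege.
    """
    policy = json.loads(policy_json)
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be un-listed
        statements = [statements]
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        for action in actions:
            if action == "*" or action.endswith(":*"):
                findings.append(f"Statement {i}: overly broad action {action!r}")
        if stmt.get("Resource") == "*":
            findings.append(f"Statement {i}: wildcard resource")
    return findings

# Example: the kind of policy an AI assistant might casually emit.
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::logs/*"},
    ],
})
for finding in flag_overprivilege(policy):
    print(finding)
```

The first statement trips both checks (wildcard action and wildcard resource); the second, scoped to a single action on a single ARN pattern, passes clean. Catching the former before it hardens into standing privilege is the whole point.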

Community support (shared learning and transparency):

  • Chat-driven access requests in Slack/Teams: Teams see and approve access changes openly, which helps surface when AI-generated scripts are pushing for unusual privileges outside of ChatOps.
  • Audit-ready workflows: Everything is logged, transparent, and explainable, so when the community (internal security engineers, auditors, compliance reviewers) wants to review AI-impacted deployments, the evidence is already there. Brace for international cybersecurity standards making more of this in the future.

Microsoft GitHub is going to be the place to watch. Let’s stay curious, discerning, prepared, and ready to pilot (not just code) as we head into this crazy AI-augmented future.

Nik Hewitt

August 22, 2025
