Today’s digest tracks the rapid commercialization and weaponization of AI across several distinct fronts. In the enterprise sector, Intercom reports that its Fin agent has scaled to $100 million in revenue, a milestone coinciding with a company-wide mandate for AI adoption across all internal roles. On the geopolitical stage, China continues to challenge US export controls by bypassing silicon restrictions and scaling the production of low-cost humanoid robots, while the PLA integrates commercial AI into active military operations. We also examine the evolving capabilities of generative models, from OpenAI’s transition of ImageGen into a sophisticated creative agent to the emergence of multi-model tandems designed to expand vulnerability coverage in cybersecurity. On the defensive side, Google has thwarted an AI-driven zero-day exploit, highlighting the escalating arms race in automated hacking and defense. Furthermore, we explore the conceptual shift in physical intelligence, described by Lachy Groom as a "GPT-2 moment" for robotics, and the strategic frameworks proposed by Anthropic regarding the US-China AI rivalry. The digest concludes with an analysis of the CCP’s continued scaling of AI-driven repression systems, illustrating the dual-use nature of these technologies in governance and control.
Chinese Humanoid Robots Lead in Volume and Cost
The global landscape for humanoid robotics has shifted decisively toward East Asia, with Chinese manufacturers now exerting a dominant influence over the market. By 2025, Chinese firms accounted for nearly 90% of all global humanoid robot sales, signaling a rapid transition from experimental prototypes to mass-market availability. The disparity in shipping volume reveals a profound gap in industrial capacity. Unitree, a leading Chinese player, shipped more than 5,500 humanoid robots over the last year. This volume stands in sharp contrast to the output of high-profile American firms such as Tesla, Figure AI, and Agility Robotics, each of which shipped approximately 150 units. This suggests that while the United States continues to produce high-concept robotics often associated with cinematic visions of the future, China is focusing on the practical realities of deployment and scale. The sheer magnitude of this difference—where one company ships over 5,000 units while others struggle to reach the hundreds—indicates a fundamental difference in how these two regions approach the commercialization of humanoid AI.
The engine driving this rapid expansion is a stark disparity in production costs. Current estimates indicate that China can produce humanoid systems at a fraction of US costs, with figures ranging from roughly one-fifth to one-tenth of what US-based manufacturers spend. This cost advantage creates an immense barrier to entry for Western firms attempting to compete on price. The retail market already reflects the imbalance; for instance, Agibot offers a humanoid robot for around $14,000. Even the most optimistic projections from the US side struggle to match this. Elon Musk has estimated that Tesla’s Optimus could eventually be priced between $20,000 and $30,000, a target that still places the American machine at a significant premium compared to what is already available from Chinese competitors. With a cost gap this persistent, the trajectory of the industry shifts away from high-margin luxury robotics toward mass-market utility.
This economic reality has forced a strategic reassessment among American tech leaders. Elon Musk has openly acknowledged that China's proficiency in both artificial intelligence and manufacturing makes it Tesla's toughest competition; in his view, there are no other competitors of significant scale outside of China. This sentiment is echoed by industry analysts such as Chen Jing, vice president of a technology and strategy research institute, who noted that the development of the GD01 indicates China has crossed a critical engineering threshold: a move toward systems that are not just theoretical but physically viable for mass production. However, the rush toward volume has not come without trade-offs in functionality. Some of the most visible Chinese models are designed more for publicity and demonstrations of force than for general utility, and robots from Unitree are noted for lacking significant dexterity, indicating that the current lead in volume and cost does not yet translate to a lead in sophisticated fine-motor capabilities. The competition is thus split between the American pursuit of frontier intelligence and the Chinese mastery of industrial execution.
PLA Integrates Commercial AI into Military Ops
The People's Liberation Army is aggressively bridging the gap between commercial innovation and military application. By procuring commercially developed Chinese AI systems, the PLA is integrating advanced models directly into its operational framework. Specifically, DeepSeek models are being deployed to coordinate unmanned vehicle swarms and to bolster cyber offense capabilities. This shift represents a tactical evolution in how the Chinese state leverages technology, moving beyond the domestic surveillance and censorship tools already used by state security agencies (biometric data collection, communication surveillance, and facial recognition) toward active military utility. AI-enabled techno-authoritarianism already lets the CCP enforce draconian policies at home and hack foreign government agencies; the move into military-grade commercial AI suggests a new phase of capability. Integrating these commercial tools allows the military to adopt cutting-edge capabilities rapidly without relying solely on internal development cycles.
The potential for autonomous cyber warfare is a primary concern, particularly given the capabilities of specialized, high-parameter models. For instance, Anthropic's Mythos model, a 10-trillion-parameter system released to select partners via Project Glasswing, demonstrates a profound proficiency in discovering code vulnerabilities and hacking systems. If a laboratory in the People's Republic of China were to develop a model with capabilities similar to the Mythos preview, the state would possess a system capable of autonomously discovering and chaining software vulnerabilities to penetrate critical American infrastructure. The threat is compounded by a noticeable lack of safety guardrails in Chinese commercial models. Testing indicates that the DeepSeek R1 model is significantly more vulnerable to jailbreaking than its American counterparts; under common techniques, it complied with 94% of overtly malicious requests, whereas US reference models complied with only 8%. This suggests a willingness to forgo, or an inability to implement, the prudent pre-deployment safety measures common in US labs.
Despite these advancements, China faces a severe hardware deficit that complicates its long-term trajectory. Analysis of industry roadmaps suggests that Huawei's aggregate compute will remain a small fraction of Nvidia's total processing performance: roughly 4% in 2026, dropping to 2% by 2027. This disparity is critical because algorithmic improvements are not a substitute for raw compute; rather, they are a function and multiplier of it. The process of discovering new algorithmic efficiencies is itself compute-intensive, meaning those with more hardware can run more experiments and unlock gains faster. To circumvent this, Chinese companies are employing distillation attacks on US models, training their own systems on the outputs of American ones. This technique lets them develop AI that is functionally on par with American systems at a fraction of the original financial and computational investment, effectively undermining the economic advantages of US-led development.
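The economics of distillation can be illustrated at toy scale. The sketch below is purely conceptual: a linear function stands in for an expensive "teacher" model, and a least-squares fit stands in for the "student." Real model distillation trains a neural network on a larger model's outputs, but the asymmetry is the same in miniature, since the original lab paid for the training run while the imitator needs only cheap queries.

```python
import random

# Toy illustration of distillation: a cheap "student" is fitted to the
# outputs of an expensive "teacher". Everything here is a stand-in; the
# teacher is just a linear function, not an actual model.

def teacher(x: float) -> float:
    # Pretend this behavior cost an enormous training run to produce.
    return 3.0 * x + 1.0

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(100)]
pairs = [(x, teacher(x)) for x in xs]  # querying the teacher is cheap

# Fit the student with ordinary least squares (closed form, one feature).
n = len(pairs)
sx = sum(x for x, _ in pairs)
sy = sum(y for _, y in pairs)
sxx = sum(x * x for x, _ in pairs)
sxy = sum(x * y for x, y in pairs)
w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - w * sx) / n

# The student recovers the teacher's behavior (w ≈ 3, b ≈ 1) without
# repeating the original investment.
print(round(w, 6), round(b, 6))
```

A hundred inexpensive queries were enough to replicate the teacher's behavior, which is exactly the cost structure the passage describes.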
The ultimate objective in this technological competition is the achievement of recursive self-improvement. This occurs when an AI reaches a state of automated research, enabling it to enhance its own performance at a rate that surpasses any human-led laboratory. This threshold represents a definitive finish line in the AI race. Once recursive self-improvement begins, the resulting exponential gains in performance create a gap so vast that any second party, regardless of their subsequent efforts, becomes unable to catch up. The transition from automated AI research to superintelligence happens shortly thereafter, ensuring that the first mover can dictate the values and norms of an AI-enabled future. For the PLA, the integration of commercial models is a stepping stone toward this goal, attempting to overcome an intelligence deficit through rapid deployment and strategic shortcuts, as the race for superintelligence remains a winner-take-all scenario.
OpenAI Evolves ImageGen into Creative Agent
OpenAI is fundamentally changing how it communicates internally, signaling a broader shift in how generative image tools are integrated into professional workflows. Within the company, ImageGen has moved beyond a novelty to become a primary vehicle for corporate storytelling, with more than half of all internal presentation slides now created using the model. This transition highlights a strategic move toward image-based communication, where visual assets are used not merely for decoration but to illustrate and explain complex conceptual frameworks. By leveraging the model's ability to generate variations and amplify creative direction, OpenAI employees are treating the tool as a creative amplifier. This approach allows those with specific taste or judgment to push the model further, expanding the creative outlet for individuals who can now produce multiple styles and variations with unprecedented ease.
This internal success is paving the way for a more ambitious evolution: the transformation of ImageGen from a standalone generation tool into a personalized creative agent. The vision is to move toward a model ecosystem where the AI acts as a dedicated assistant that understands specific user preferences and professional contexts. Rather than simply responding to prompts, this agent is intended to function as a specialized consultant, capable of operating as a personal interior designer, an architect, or a wedding planner. A critical component of this evolution is the model's sophisticated understanding of composition and text rendering. The ability to manage not only the content of an image but also the layout and presentation—particularly in the form of infographics—represents a significant leap in the model's utility. This capability to understand how to present information, rather than just what to say, is viewed as a superpower that will drive future explorations of the model's potential.
To support this agentic vision, OpenAI has focused heavily on the consistency of output across multiple images, a historically difficult hurdle for generative AI. The current model allows users to maintain a coherent aesthetic and character identity across extended projects, enabling the creation of ten-page comic books with consistent storylines or detailed character sheets featuring various poses. This stability is a departure from previous, more fragmented workflows that were often described as janky, requiring users to find complex workarounds to maintain visual continuity. This consistency is further enhanced in the "thinking" or "pro" versions of the model. These advanced iterations integrate external tools, allowing the AI to search the web and analyze files to refine image quality and composition. By leveraging these under-the-hood tools, the model can produce higher-fidelity photos that are grounded in external data, ensuring a level of professional polish.
Perhaps the most disruptive application of this evolution is the intersection of ImageGen’s aesthetic capabilities with the coding proficiency of Codex. By combining a strong visual model with a powerful coding agent, OpenAI is enabling the "zero-shot" creation of fully functional applications and websites. In this workflow, a user can design a visual concept through ImageGen, and the coding agent can then implement that design from scratch. This synergy transforms the tool from a simple image generator into a comprehensive production pipeline. The practical applications are already diverse; for instance, real estate agents can create and stage apartment listings, while YouTube creators use the model for thumbnails and promotional content. Top artists are also utilizing the tool to connect with their fans. By integrating these capabilities, OpenAI aims to make ImageGen a standard part of the professional toolkit, eventually becoming an essential piece of the everyday workflow for any professional in a visual or creative industry.
Intercom's Fin Agent Scales to $100M Revenue
Intercom, an Irish-American B2B SaaS company with a fifteen-year history, has executed one of the most decisive strategic pivots in the current software landscape. The organization shifted its entire focus toward becoming an AI-centric company almost immediately following the debut of ChatGPT, recognizing the paradigm shift in how businesses interact with software. This rapid realignment culminated in the introduction of Fin, an AI agent designed to revolutionize customer support. The timing of Fin's release was synchronized with the launch of GPT-4, allowing the company to leverage the most advanced capabilities of large language models from the first day of availability. This move represents a fundamental transformation for a company that had already spent over a decade establishing its presence in the business software market, signaling a bold willingness to rebuild its core value proposition around generative intelligence to maintain its competitive edge.
The commercial results of this strategic pivot have been substantial and swift. Fin has rapidly scaled its market presence, now serving a customer base that exceeds 8,000 organizations. From a financial perspective, the AI agent has driven immense growth, with revenues now approaching the $100 million mark. This financial trajectory is particularly notable given the speed of the rollout. Beyond the raw revenue figures, the product has demonstrated strong operational utility, achieving average resolution rates regarded as industry-leading. This combination of rapid adoption and high performance suggests a strong market appetite for automated support solutions that can handle complex queries without sacrificing efficiency or quality. The speed at which Fin reached this level of revenue indicates a successful product-market fit within the enterprise AI sector, proving that the transition from traditional SaaS to AI-driven agents can yield immediate fiscal rewards.
The quality of Intercom's client roster further validates the effectiveness and reliability of the Fin agent. The tool is currently utilized by several high-profile technology firms that are themselves leaders in the software space, including Snowflake, Anthropic, Glean, Linear, and LaunchDarkly. These companies, often operating at the forefront of their own technical innovations, provide a powerful endorsement of Fin's capabilities in managing customer interactions at scale. The ability to attract such a sophisticated set of users suggests that the agent provides the reliability and sophistication required by the most demanding B2B environments. By securing these high-tier accounts, Intercom has moved beyond simple tool provision to become a critical piece of the support infrastructure for some of the most prominent and technically discerning names in the modern software ecosystem, effectively bridging the gap between experimental AI and enterprise-grade utility.
Supporting this aggressive AI expansion is a substantial global operational footprint of approximately 1,400 employees. Intercom maintains a presence across several key international hubs, including San Francisco, Chicago, London, Berlin, and Sydney, ensuring global reach for its sales and support operations. Its technical core, however, remains heavily centered in Ireland: research and development is led from Dublin, and the vast majority of its engineering talent is distributed across Europe. This structure allows the company to maintain a concentrated R&D focus while leveraging a broad international talent pool to execute its vision. The scale of this workforce, combined with the strategic distribution of its engineering teams, provides the organizational infrastructure to sustain the rapid growth of the Fin agent and continue the company's evolution into a dedicated AI powerhouse. This operational model ensures that the company can iterate quickly on its AI offerings while maintaining the stability required by its growing list of enterprise clients.
Google Thwarts AI-Driven Zero-Day Exploit
The landscape of digital security has shifted into a new, more volatile era. On May 11th, the Google Threat Intelligence Group (GTIG) identified a milestone in cyber warfare: the first confirmed instance of an attacker using a large language model to develop and deploy a functional zero-day exploit in the wild. The incident involved a sophisticated attempt to compromise Python-based functionality within a widely used open-source web administration tool, specifically targeting its two-factor authentication protocols. By identifying the attack, Google neutralized what could have been a significant mass-exploitation event, marking a pivotal moment in the ongoing struggle between automated offensive capabilities and defensive infrastructure.
What makes this discovery particularly revealing is the forensic trail left behind by the attackers. Google’s analysis suggests the exploit was generated, at least in part, by an AI model that exhibited clear signs of hallucination. Within the malicious code, researchers discovered an erroneously included Common Vulnerability Scoring System (CVSS) rating. Such a detail is rarely, if ever, included by human hackers in actual malware, as it serves no functional purpose for the exploit itself. Because LLMs are frequently trained on vast datasets drawn from vulnerability databases, however, the model likely synthesized this technical metadata as part of its output. This telltale slip provided the definitive clue that the exploit was not the product of a traditional manual effort but the result of an AI-driven generation process. The incident confirms that adversaries are moving beyond simple content generation and are now leveraging AI for the active development of complex, functional exploit chains.
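A hallucination tell like this lends itself to a simple triage heuristic. The sketch below is purely illustrative and is not Google's actual detection logic; the regex, function name, and sample strings are all assumptions. It merely flags exploit samples carrying vulnerability-database metadata (here, a CVSS v3 vector string) that serves no purpose in working malware but is abundant in the text LLMs are trained on.

```python
import re

# Hypothetical heuristic, for illustration only: flag samples that embed
# out-of-place scoring metadata such as a CVSS v3 vector string. Human
# attackers have no reason to include this; models trained on CVE data do.
CVSS_VECTOR = re.compile(r"CVSS:3\.\d+/AV:[NALP]")

def smells_ai_generated(source: str) -> bool:
    """Return True if the sample contains vulnerability-database
    metadata that a hand-written exploit would not normally carry."""
    return bool(CVSS_VECTOR.search(source))

suspect = "# CVSS:3.1/AV:N/AC:L/PR:N severity 9.8\npayload = build()"
clean = "payload = build()"
print(smells_ai_generated(suspect), smells_ai_generated(clean))  # True False
```

One string match is obviously not attribution; a real pipeline would treat this as one weak signal among many, which is consistent with how the digest describes the forensic finding.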
This shift is part of a broader, more ominous trend toward autonomous attack orchestration. Sophisticated actors are increasingly utilizing AI models to navigate systems and make real-time decisions, allowing malware payloads to execute precise commands without constant human supervision. The implications of this are profound, as evidenced by the recent discovery of a kernel local privilege escalation chain targeting macOS 26.4.1 on Apple M5 hardware. Despite the presence of state-of-the-art memory integrity enforcement, researchers were able to use the Claude Mythos model to identify a vulnerability that had persisted through decades of human security auditing. The fact that AI can uncover deep-seated bugs that have eluded human experts for years suggests that the barrier to entry for high-level exploitation is collapsing. This has led to a climate where security firms are now racing to keep pace; for instance, Palo Alto Networks reported a seven-fold increase in internal vulnerability detection—identifying 75 flaws in a single month—after integrating advanced models like Claude Mythos and GPT 5.5 Cyber into their workflows.
The industry is responding to this "bugmageddon" with a mix of caution and strategic resource allocation. Because the potential for widespread damage is so high, organizations are becoming increasingly secretive about the specific nature of the vulnerabilities they discover. When researchers find critical flaws, such as those in Apple’s architecture, they are opting for physical, offline disclosure rather than digital communication to prevent the information from being scraped by adversarial AI models. Simultaneously, major players are investing heavily in defensive AI. Anthropic, for example, has launched Project Glasswing, providing $800 million in token access to organizations, including $100 million specifically for banks, to help them automate the patching of their systems. As AI-assisted exploitation continues to evolve, the security community is finding that orchestration (using over 100 models in tandem, as seen with Microsoft’s Mdash) is currently the only effective way to counter the brute-force speed of AI-driven attacks. We are entering a period where the speed of patching must match the speed of AI-generated discovery, a challenge that will define the next decade of digital defense.
Lachy Groom Labels Physical Intelligence a 'GPT-2 Moment'
The field of physical intelligence is currently navigating a phase that Lachy Groom describes as a "GPT-2 moment." To understand this framing is to understand the critical gap between theoretical potential and widespread practical utility. In the evolution of large language models, GPT-2 represented a stage where the technology demonstrated clear signs of life and genuine capability, yet lacked the massive scaling required to become a ubiquitous tool for the general public. Groom applies the same logic to the current state of robotics, explicitly noting that the field is not yet at a GPT-4 or GPT-5 level of sophistication. For nearly forty years, the industry has struggled with tasks that humans find trivial: simple actions such as walking, grasping varied objects, or folding laundry have remained stubbornly difficult for machines to master, representing the central obstacle in the pursuit of robotic autonomy. Physical intelligence is the effort to finally close this gap, recognizing that while the foundations are now visible, the technology requires a significant leap in scale before it can be considered useful for most people globally.
The ambition to solve these long-standing robotic hurdles is driving the growth of Physical Intelligence, a company co-founded by Groom. His professional trajectory reflects a lifelong immersion in the high-growth ecosystems of technology, having moved from Perth, Australia, to Silicon Valley at the age of seventeen. Before venturing into the realm of physical intelligence, Groom spent six years as an early employee at Stripe, where he gained experience in the operational demands of a rapidly scaling company. This business acumen merged with deep scientific expertise in 2023, when Groom joined forces with a group of elite scientists departing DeepMind's robotics team, a cohort that included prominent researchers such as Chelsea Finn, Sergey Levine, and Karol Hausman. The market's confidence in this combination of operational experience and scientific pedigree is evident in the company's financial standing: Physical Intelligence has raised more than a billion dollars and currently holds a valuation of $5.6 billion.
Despite the significant capital and talent involved, Groom remains realistic about the scaling required to move beyond the current "GPT-2" phase. The transition from a promising prototype to a globally useful tool requires a level of scaling that the field has not yet achieved, though the signs of potential are undeniable. The roadmap for deployment, however, is becoming increasingly clear. Groom anticipates that enterprise-level implementation is within reach over the next one to three years. This initial phase will likely focus on specialized industrial or commercial applications where the requirements are more constrained and the value proposition is more immediate than in a general consumer environment. Once the technology has been refined through these enterprise deployments and the scaling hurdles are overcome, a broader wave of consumer products is expected to follow, eventually bringing physical intelligence into the domestic sphere and the daily lives of the general population.
The ultimate objective behind this scaling effort is a fundamental shift in how human labor is distributed across the global economy. Groom's long-term vision is centered on the idea that robots should assume the burden of the work that humans find undesirable. This includes the boring, dangerous, repetitive, and meaningless tasks that people currently perform out of necessity rather than personal choice. By automating the drudgery of physical existence, the goal is to liberate human beings from these constraints. The premise is that once people are freed from the necessity of performing meaningless labor, they can redirect their time and energy toward pursuits that are actually meaningful. This vision transforms physical intelligence from a mere engineering challenge into a tool for human liberation, provided the industry can successfully navigate the scaling journey from its current nascent state to full maturity.
CCP Scales Repression via AI Systems
The technological race for artificial intelligence supremacy is not merely a contest of economic output or computational speed; it is a fundamental struggle over the future of governance and global norms. As the Chinese Communist Party (CCP) accelerates its development of advanced AI, the implications for human rights and international security are becoming increasingly stark. The CCP is currently integrating these sophisticated systems into the very fabric of its state apparatus, moving beyond traditional methods of control to create a digital architecture of total surveillance. By embedding AI into the mechanisms of censorship, the state has found a way to sanitize the information environment at a speed and scale that was previously impossible. This is not a passive tool for administration but an active instrument of suppression, designed to identify and neutralize dissent before it can gain any meaningful momentum within the public sphere.
Beyond the digital realm, the CCP is applying these same computational capabilities to the physical enforcement of draconian policies, particularly regarding ethnic minorities. The deployment of AI in this context represents a chilling evolution in authoritarian tactics. By automating the identification and tracking of targeted populations, the state can exert pressure with a level of precision that defies human capacity. This technological integration allows for the continuous monitoring of behavior, movement, and association, effectively turning the environment into a self-policing system. Furthermore, the CCP is leveraging its AI prowess to extend its reach far beyond its own borders. By utilizing these systems to hack into foreign government agencies and major international corporations, the party is systematically harvesting intellectual property to bolster its domestic technological base. This dual-use strategy—simultaneously refining internal repression while fueling external economic and political aggression—underscores the multifaceted threat posed by the CCP’s current trajectory.
Historically, the efficacy of authoritarian rule has been constrained by the physical limitations of human enforcers. Dictatorships have always relied upon a vast network of human agents to conduct surveillance, report on neighbors, and carry out the state’s will. This reliance on people created natural bottlenecks, points of failure, and inherent inefficiencies that often tempered the reach of the regime. The transition to AI-driven repression fundamentally alters this dynamic by removing the dependency on human agents. Powerful AI systems can process, analyze, and act upon massive datasets without fatigue, bias, or the potential for moral hesitation. This automation allows for the scaling of repression to a degree that was previously unimaginable, enabling the state to maintain a pervasive and inescapable grip on society. The human element, once the primary mechanism of state control, is being replaced by algorithms that can monitor, judge, and punish in real-time.
This shift is of profound geopolitical significance. The political systems that succeed in leading the development of the most advanced AI will inevitably dictate the rules and norms for how this technology is deployed globally. If the CCP establishes itself as the primary architect of this new era, it will possess the ultimate leverage to shape the digital landscape in its own image. The ability to dictate the use and deployment of AI grants a nation immense power, effectively allowing it to set the standards for global interaction. For democratic nations, the challenge is not just to compete in terms of raw innovation, but to secure a commanding lead that ensures the development of AI remains aligned with principles of transparency and human rights. Failing to do so risks a future where the norms of the digital age are defined by the very systems currently being used to automate repression. The window to establish a different, more stable trajectory is narrowing, as the race to define the future of AI becomes the most critical geopolitical contest of our time. The outcome of this competition will determine whether AI serves as a tool for human advancement or as an engine for the most efficient and scalable form of authoritarian control the world has ever witnessed.
Multi-Model Tandems Expand Vulnerability Coverage
The landscape of cybersecurity is shifting as researchers and adversaries alike move beyond the reliance on a single artificial intelligence engine. The current strategic trend involves deploying multiple AI models in tandem to cast a significantly wider net over potential system weaknesses. For instance, utilizing Claude Mythos alongside the cybersecurity-focused iteration of GPT 5.5 allows for a more comprehensive analysis because these models do not overlap perfectly in their discovery patterns. Each model tends to identify distinct vulnerabilities that the other might overlook, meaning that their strengths are complementary rather than redundant. By running these systems in parallel, operators can maximize their reach, ensuring that a broader spectrum of security flaws is uncovered than would be possible with a solitary tool. This multi-model approach transforms vulnerability research from a linear search into a multi-dimensional sweep, significantly increasing the probability of finding critical exploits that would otherwise remain hidden under a single-model regime.
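The complementary-coverage claim above reduces to simple set arithmetic. The sketch below is illustrative only: the two scanner names mirror the pairing the passage describes, but the finding identifiers are invented, not real vulnerability IDs or real model output.

```python
# Findings from two models run in tandem (identifiers are invented
# placeholders for illustration; they are not real CVEs).
mythos_findings = {"VULN-0001", "VULN-0007", "VULN-0042"}
gpt_cyber_findings = {"VULN-0007", "VULN-0105", "VULN-0311"}

combined = mythos_findings | gpt_cyber_findings  # union: total coverage
overlap = mythos_findings & gpt_cyber_findings   # both models agree
unique = mythos_findings ^ gpt_cyber_findings    # found by only one model

print(len(combined), len(overlap), len(unique))  # 5 1 4
```

Because the overlap is small relative to the union, most of the combined coverage comes from findings only one model surfaced, which is precisely why running the models in parallel beats relying on either alone.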
This methodology is already yielding tangible results in the wild, particularly concerning open-source web administration portals. Recent observations highlight the targeting of two-factor authentication mechanisms, with specific attention drawn to entities such as openclaw and oneclaw. The sophistication of these attacks is further enhanced by a strategic refinement process that moves beyond simple prompt-and-response interactions. Rather than deploying raw AI outputs, there is a clear trend toward polishing AI-generated payloads within controlled settings, a step that increases the reliability of an exploit before it is ever deployed against a live target, a detail specifically noted by Google. This systematic approach to vulnerability discovery is contributing to what some experts are calling a "bug apocalypse" or "vuln apocalypse." The urgency of the situation is echoed by a wide array of industry figures and organizations, including JP Diamond, Dario Amodei, Microsoft, Palo Alto Networks, and the Isle Group. Google researchers have confirmed that this era of accelerated vulnerability discovery is not a future threat but a present reality, signaling a fundamental shift in how software flaws are identified and weaponized.
While the current surge in vulnerabilities is significant, it is viewed by many analysts as merely a precursor to a much larger and more disruptive crisis. The existing acceleration in AI-driven exploits is expected to ramp up significantly as AI capabilities in China continue to evolve and catch up to global standards. The primary concern for the security community lies in the likelihood that high-capability Chinese AI will be released as open source. Once these powerful tools are available without restriction, the barrier to entry for discovering and exploiting complex vulnerabilities will drop precipitously, allowing a wider range of actors to conduct high-level research. This democratization of high-end AI capabilities is expected to trigger a massive wave of cyber vulnerabilities, far exceeding the current volume of discoveries. The transition from proprietary, controlled models to open-source, high-capability AI from China represents a critical inflection point in the cybersecurity arms race. As these tools become ubiquitous, the speed and scale of vulnerability discovery will likely outpace the ability of traditional security frameworks to patch them, fundamentally altering the risk profile for global digital infrastructure and creating a persistent state of vulnerability.
Intercom Mandates AI Adoption for All Roles
Intercom has adopted a remarkably hardline approach to the integration of artificial intelligence within its research and development organization, transforming AI proficiency from a recommended skill into a non-negotiable mandate. In a decisive move to accelerate organizational change, the company updated the formal job descriptions for its engineers, designers, and product managers to reflect a new standard of performance. This update establishes AI adoption as a binary requirement for meeting job expectations. Under this rigid framework, any employee across these R&D roles who fails to integrate AI into their daily workflow is explicitly categorized as not meeting the expectations of their position. This is not presented as a gradual transition or a set of suggested guidelines; it is a strict, binary performance metric. The leadership at Intercom has emphasized that such a shift requires decisive executive guidance and the repetition of the same message across every possible forum to instill a deep sense of urgency throughout the company. By embedding these requirements directly into the job descriptions and rewarding those who comply, the organization ensures that the adoption of AI is viewed as fundamental to the professional success and continued employment of every team member involved in building their software.
Parallel to this administrative mandate, Intercom is fundamentally redefining the cognitive approach its teams take when interacting with AI agents, shifting the operational philosophy from task execution to problem resolution. Historically, the common tendency has been to prompt agents with specific, granular instructions—essentially telling the AI to run a particular skill to achieve a specific, narrow result. While this method of prompting is still functional and remains necessary in certain contexts, Intercom is actively guiding its workforce to move away from this restrictive, task-oriented approach. The new objective is to provide agents with problems to solve rather than tasks to perform. Instead of dictating the exact steps or specific skills the agent should employ, employees are now encouraged to describe the overall problem or the intended outcome they wish to achieve. This shift allows the AI agent to autonomously determine which skills to invoke and how to best proceed to reach the desired goal. By treating the agent as a strategic problem-solver rather than a mere tool for executing pre-defined skills, Intercom aims to fully leverage the capabilities of autonomous agents, significantly reducing the need for humans to micromanage the technical execution.
This evolution in workflow reflects a broader transition in the nature of engineering and product development, which Intercom describes as the process of moving up the stack. The company operates on the core belief that the entire landscape of engineering is changing, asserting that any action an agent is capable of performing should be handled by the agent. This shift mirrors previous seismic industry transitions, such as the move from traditional Unix system administration to the modern cloud era. In the past, a sysadmin's role involved the physical labor of racking servers, cabling hardware, and manually configuring networks within data centers. The advent of the cloud abstracted those physical requirements, allowing engineers to operate at a higher level. Similarly, AI is now abstracting the execution of specific technical tasks. For the modern product builder at Intercom, this means the role is no longer centered on the manual labor of executing a task, but on the intellectual labor of defining the problem and overseeing the agent's resolution of it. By mandating AI adoption and refining the way agents are prompted, Intercom is repositioning its R&D talent to operate at a higher level of abstraction, focusing on intent and outcome rather than the granular mechanics of implementation.
Anthropic Defines US-China AI Strategic Fronts
The strategic landscape of the artificial intelligence race between the United States and China has been distilled into a framework of four distinct, yet interconnected, fronts. As outlined by Anthropic, this competition is not merely a singular pursuit of raw processing power, but a multifaceted struggle for dominance that spans intelligence, domestic adoption, global distribution, and national resilience. These four pillars serve as the primary metrics for determining which nations will ultimately possess the capability to dictate the values, rules, and norms of an AI-enabled future. The intelligence front focuses on the development of the most capable models, while domestic adoption measures how effectively those models are integrated into the commercial and public sectors. Global distribution tracks the deployment of the underlying AI stack, and resilience evaluates a nation’s ability to maintain political and economic stability during the profound transitions triggered by these technologies. This framework suggests that the race is not just about who reaches the finish line first, but about who can build the most robust infrastructure to sustain their leadership over the long term.
Currently, the United States and its democratic allies maintain a significant advantage in the realm of compute, which remains the most critical ingredient for the development of frontier AI models. This hardware lead has been the bedrock of Western progress, providing the necessary foundation for training the most sophisticated systems in existence. However, this advantage is being tested as AI laboratories within the People’s Republic of China, operating under the oversight of the Chinese Communist Party, continue to close the gap in model intelligence. Despite the existing disparities in hardware access, these Chinese labs have demonstrated a remarkable ability to keep pace with the frontier of AI research. This narrowing gap presents a significant challenge to policymakers, who have faced criticism for failing to adequately tighten loopholes that allow critical compute resources to flow into China. As a result, firms in China have been able to leverage American-designed hardware to refine their own models, catching up to, and in some instances potentially overtaking, American capabilities.
The implications of this technological convergence are profound, particularly regarding the global governance of AI. If authoritarian regimes succeed in achieving parity or superiority in AI development, they will be positioned to shape the international norms that govern the next generation of digital infrastructure. The danger, as highlighted by current analysis, is that the very tools invented and refined in the West could be repurposed by authoritarian states to facilitate automated repression at scale. There is a growing concern that the triumph of these regimes could be achieved using the very compute resources that were once the exclusive domain of American innovation. This irony is not lost on observers, who note that the current strategy of exporting hardware to global markets may be inadvertently undermining the long-term strategic interests of the United States. If the goal is to ensure that AI development aligns with democratic values, then the reliance on hardware-based containment strategies must be reconciled with the reality of rapid technological diffusion.
Looking toward the ultimate finish line, the competition is increasingly defined by the prospect of recursive self-improvement. Once an AI system reaches the point where it can improve its own performance at a rate exceeding the capacity of any human laboratory, the nature of the race will shift fundamentally. This transition to self-improving AI represents a point of no return, where exponential gains in performance render the traditional competitive advantages of other nations obsolete. Once this threshold of automated AI research is crossed, the trajectory toward superintelligence is expected to accelerate rapidly, leaving little room for laggards to catch up. This reality underscores the urgency of the four strategic fronts, as the ability to maintain resilience and foster domestic adoption will be the only defenses against the sudden, massive shifts in power that follow the arrival of superintelligence. The race is therefore not just a contest of current capabilities, but a high-stakes preparation for a future where the speed of innovation will be dictated by the machines themselves, rather than the human institutions that created them.
China Bypasses US Export Controls on Silicon
The landscape of global semiconductor competition is currently defined by a high-stakes game of cat and mouse, centered on the efficacy of United States export controls. At the heart of this geopolitical friction lies the question of whether Washington’s restrictive measures are effectively stifling China’s rise in artificial intelligence or merely forcing the nation to innovate through illicit channels. Proponents of these controls, including firms like Anthropic, argue that the policy is fundamentally sound because it targets the specific, insurmountable bottlenecks currently plaguing the Chinese semiconductor industry. The technological divide is most visible in the manufacturing sector, where China continues to struggle with the extreme complexity of the supply chain. Specifically, the country has made negligible progress in mastering extreme ultraviolet (EUV) and deep ultraviolet (DUV) lithography technologies. These systems represent the pinnacle of modern engineering, and without them, the ability to manufacture high-bandwidth memory at scale remains a distant prospect for Chinese foundries. By denying access to these sophisticated tools, the United States aims to create a permanent compute shortfall, effectively capping the ceiling of China’s domestic AI development.
However, the reality on the ground suggests that these bureaucratic barriers are far from impenetrable. While the technological hurdles regarding EUV and DUV equipment remain significant, Chinese AI laboratories have demonstrated a remarkable aptitude for navigating the regulatory landscape by exploiting cracks in the enforcement regime. The strategy is multifaceted, relying on a combination of clandestine logistics and sophisticated digital workarounds. Reports and industry observations confirm that smuggling operations have become a primary method for circumventing restrictions. These illicit supply chains involve the movement of high-end chips across international borders, effectively bypassing the oversight mechanisms intended to keep advanced hardware out of restricted markets. By leveraging these gray-market channels, Chinese entities are able to secure the processing power necessary to train and operate frontier-level models, despite the official policy of denial maintained by the United States.
Beyond physical smuggling, the exploitation of proxy access has emerged as a critical vulnerability in the current export control framework. Rather than relying solely on the acquisition of physical hardware, Chinese labs are increasingly utilizing proxies to gain remote access to inference and compute capabilities on restricted platforms. This digital end-run allows researchers to tap into the power of high-end chips that they are officially prohibited from purchasing or owning. Furthermore, these labs have supplemented their access to hardware with large-scale distillation attacks. By illicitly extracting knowledge from American models, these organizations have managed to build AI systems that rival the intelligence of their Western counterparts. This ability to bridge the gap through a combination of talent, clever loophole exploitation, and data extraction has created a significant divergence in opinion among industry stakeholders. While some argue that the export controls have been historically successful in slowing China’s trajectory, others point to these persistent workarounds as evidence that the current strategy is leaking.
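The core mechanic of a distillation attack is that the student never touches the teacher's weights, only its input/output behavior, yet can still recover a close approximation from a log of queries. The toy example below shows this with a deliberately trivial "teacher" (a linear function standing in for a black-box model API); everything about it is a simplifying assumption, but the structure, query the teacher, then fit a student to the transcript, is the technique the text describes.

```python
import random

# Toy distillation: the student sees only query/response pairs from the
# "teacher" black box, never its parameters, yet recovers them by fitting.
def teacher(x: float) -> float:
    return 3.0 * x + 1.0  # hidden parameters the student must infer

random.seed(0)
queries = [random.uniform(-1, 1) for _ in range(200)]
answers = [teacher(x) for x in queries]  # the only signal the student gets

# Fit a student y = w*x + b by stochastic gradient descent on the log.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    for x, y in zip(queries, answers):
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

print(round(w, 2), round(b, 2))  # converges toward the hidden 3.0 and 1.0
```

Scaled up, the same pattern means that API access alone, whether purchased, proxied, or stolen, leaks a meaningful fraction of a model's capability, which is why proxy access and distillation together undercut purely hardware-based containment.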
Ultimately, the effectiveness of the export control regime hinges on the tension between the physical manufacturing limitations China faces and its agility in bypassing logistical barriers. The compute shortfall is real, and the lack of indigenous EUV and DUV capability remains a massive structural disadvantage that prevents China from competing on equal footing in hardware production. Yet, the existence of these illicit pathways means that the hardware gap does not necessarily translate into a total intelligence gap. As long as Chinese labs can utilize proxies to access compute and employ distillation techniques to mirror the performance of American models, the export controls will continue to face intense scrutiny. The debate over whether these measures are working is far from settled, as the industry watches to see if the United States can close these loopholes before the advantage in aggregate compute performance—projected for the coming years—is fully neutralized by China’s knack for circumvention. The semiconductor war is no longer just about who can build the best machine; it is about who can better manage the flow of silicon in a world where borders are increasingly porous.




