Articles

Trump-Musk Clash Triggers Scrutiny Fears Across Tesla, SpaceX, and Other Ventures

U.S. President Donald Trump’s call to review subsidies awarded to Elon Musk’s companies has sparked concerns of heightened regulatory scrutiny across the billionaire’s business empire, which spans automotive, space, energy, brain tech, and social media. The threat of government intervention may disrupt operations or stall innovation in several of Musk’s ventures. Here’s a breakdown of the U.S. agencies involved:

National Highway Traffic Safety Administration (NHTSA)
Tesla is under continued investigation by the NHTSA, especially concerning its advanced driver assistance systems. The agency is reviewing incidents involving Tesla’s robotaxi service in Austin, including videos showing vehicles misbehaving in traffic and in adverse weather. These inquiries build on broader probes into Tesla’s Full Self-Driving (FSD) technology, particularly its safety in poor-visibility conditions.

Federal Communications Commission (FCC)
The FCC has begun reviewing its spectrum sharing policies, which could affect SpaceX’s Starlink satellite internet service. SpaceX is seeking new spectrum access to expand satellite coverage, but decades-old limits on signal power remain a barrier. The review could influence future Starlink deployments and broadband expansion goals.

Food and Drug Administration (FDA)
Neuralink, Musk’s brain implant startup, falls under the FDA’s oversight. After an initial rejection due to safety concerns, the FDA granted clearance for clinical trials, which are currently underway in the U.S. Neuralink is also exploring trials in Canada. The FDA will decide if Neuralink’s implants can eventually be marketed.

Environmental Protection Agency (EPA)
The EPA monitors SpaceX’s wastewater output at its Texas launch site and coordinates with other federal agencies under the National Environmental Policy Act. SpaceX’s rocket activities must pass environmental impact assessments to ensure compliance with land, water, and wildlife protection standards.

Federal Aviation Administration (FAA)
In September, the FAA proposed a $633,000 fine against SpaceX for violating licensing requirements before two 2023 launches. The FAA continues to investigate the company’s safety compliance, especially after repeated rocket explosions. Additional restrictions may follow.

Securities and Exchange Commission (SEC)
Musk is facing litigation from the SEC related to his 2022 acquisition of Twitter (now X). The agency has also probed Neuralink’s compliance and transparency, according to a December 2023 letter from Musk’s attorney, posted on X.

Federal Trade Commission (FTC)
The FTC oversees data and privacy protections at Musk’s social media platform, X. The agency is also examining antitrust allegations, reviewing whether media watchdog groups coordinated an advertiser boycott of X, which Musk claims was illegal.

Regulatory Risk Outlook
Trump’s renewed focus on Musk’s government support could pave the way for increased enforcement or changes to existing subsidies, affecting growth trajectories across his enterprises. With Musk already under the microscope at multiple agencies, the political escalation adds another layer of complexity.

US Implements New AI Chip Regulation to Control Global Access

The U.S. government has introduced a new regulation to restrict global access to U.S.-designed artificial intelligence (AI) chips and technology. This regulation targets the export of advanced graphics processing units (GPUs), essential for building AI models, and aims to ensure that cutting-edge AI capabilities are developed and deployed securely and in trusted environments.

Which Chips Are Restricted?

The regulation focuses on GPUs, which were initially created to accelerate graphics rendering but have become critical for AI due to their ability to process large amounts of data simultaneously. U.S. companies, particularly Nvidia, dominate the production of these chips. GPUs like Nvidia’s H100 are used extensively in training advanced AI models, such as OpenAI’s ChatGPT.

What Is the U.S. Doing?

To regulate global access, the U.S. is extending restrictions on advanced GPUs, specifically those used in AI training clusters. The new rule sets limits based on compute power, measured by Total Processing Performance (TPP). For most countries, the cap is set at 790 million TPP until 2027, equivalent to roughly 50,000 H100 GPUs. These restrictions are meant to control access to the computing power required for large-scale AI research and applications.

However, certain companies, like Amazon Web Services and Microsoft Azure, that meet the requirements for special authorizations (called “Universal Verified End User” status) are exempt from these caps. Additionally, entities granted “National Verified End User” status are allowed more advanced GPUs, up to roughly 320,000 H100-equivalents over the next two years.
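The cap arithmetic above can be sketched as follows. This is a rough illustration, not the rule’s own formula: the per-H100 TPP value is inferred from the article’s figures (790,000,000 ÷ 50,000 = 15,800), and the constant names are illustrative.

```python
# Sketch of the tiered TPP caps described above.
# Assumption: one H100 corresponds to ~15,800 TPP, the value implied by the
# article's numbers (790 million TPP cap ≈ 50,000 H100s); not quoted from the rule.

COUNTRY_CAP_TPP = 790_000_000            # default per-country cap through 2027
TPP_PER_H100 = COUNTRY_CAP_TPP / 50_000  # ≈ 15,800 TPP per H100 (inferred)

def gpus_allowed(cap_tpp: float, tpp_per_gpu: float = TPP_PER_H100) -> int:
    """Rough H100-equivalent count that a given TPP allowance permits."""
    return int(cap_tpp // tpp_per_gpu)

print(gpus_allowed(COUNTRY_CAP_TPP))  # 50000
```

The same conversion applies to the larger Verified End User allotments: a National VEU budget of about 320,000 H100-equivalents corresponds to roughly 320,000 × 15,800 ≈ 5.06 billion TPP under this assumption.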

Exceptions to Licensing

There are exceptions for small GPU orders, such as those for universities or research institutions. Orders that do not exceed 1,700 H100 chips only require government notification and do not count toward the caps. This exception is designed to facilitate the global flow of AI technology for low-risk purposes.
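The notification-only carve-out amounts to a simple threshold check, sketched below. The 1,700-chip figure comes from the article; the function name and labels are illustrative, not terms from the rule text.

```python
# Sketch of the small-order exception described above.
# Orders at or below the threshold require only government notification
# and do not count toward the country-specific caps.

SMALL_ORDER_LIMIT = 1_700  # order size in H100-equivalents (per the article)

def export_treatment(order_h100_equiv: int) -> str:
    """Classify an order: notification-only (outside the caps) or license-required."""
    if order_h100_equiv <= SMALL_ORDER_LIMIT:
        return "notification-only"
    return "license-required"

print(export_treatment(1_000))   # notification-only
print(export_treatment(50_000))  # license-required
```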

GPUs intended for gaming are also excluded from the restrictions, ensuring that the gaming sector remains unaffected by the new rules.

Which Places Can Get Unlimited AI Chips?

Eighteen countries are exempt from the country-specific caps on GPUs. These include close allies such as Australia, Canada, Japan, South Korea, Taiwan, the United Kingdom, and a number of Western European countries (not the entire European Union); the U.S. itself faces no cap. This list reflects nations the U.S. considers aligned in terms of AI development and security.

What Is Being Done with ‘Model Weights’?

In addition to GPUs, the U.S. is regulating “model weights,” the numerical parameters an AI model learns during training. These weights, which encode a model’s capabilities, are considered sensitive information. The new rule establishes security measures to protect these parameters, ensuring that only trusted entities manage the most advanced AI systems.

Conclusion

The U.S. regulation reflects growing concerns over AI technology’s potential misuse and aims to ensure its responsible development. By controlling the flow of critical AI resources like GPUs and model weights, the U.S. seeks to maintain dominance in the AI field while preventing sensitive technology from reaching adversarial nations.