GitHub to Launch ‘Secret Protection’ and ‘Code Security’ Products Soon

GitHub’s GHAS, a developer-first application security testing solution built to improve code security in public and private repositories on GitHub, will now be available as two separate security products.
Starting April 1, the firm plans to make GitHub Advanced Security (GHAS) more accessible to developers and teams of all sizes. GHAS will be unbundled into two standalone security products, ‘Secret Protection’ and ‘Code Security’, which GitHub Team customers can purchase without an enterprise license.
GitHub Secret Protection detects and prevents ‘secret leaks’ before they occur, using push protection, secret scanning, and AI-powered detection. GitHub Code Security, meanwhile, helps identify and remediate vulnerabilities with code scanning, Copilot Autofix, security campaigns, and related features.
GitHub said that development teams can adopt the two products independently, without committing to a bundled solution like GHAS. Secret Protection starts at $19 and Code Security at $30 per active committer per month.
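As a rough illustration of that pricing, a hypothetical team of 50 active committers adopting both products would pay about 50 × ($19 + $30) = $2,450 per month. For teams that manage repository settings programmatically, the toggles these products build on can already be reached through GitHub’s existing REST API, which exposes secret scanning and push protection under a repository’s security_and_analysis settings. The snippet below is a minimal sketch against that existing endpoint, not an official onboarding flow for the new products; the token, owner, and repository names are placeholders.

```python
# Minimal sketch: enable secret scanning and push protection on one repository
# via GitHub's REST API ("Update a repository" endpoint). Requires a token with
# admin rights on the repo; OWNER and REPO are placeholder values.
import requests

GITHUB_TOKEN = "ghp_example"        # placeholder personal access token
OWNER, REPO = "my-org", "my-repo"   # placeholder repository

resp = requests.patch(
    f"https://api.github.com/repos/{OWNER}/{REPO}",
    headers={
        "Authorization": f"Bearer {GITHUB_TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "security_and_analysis": {
            "secret_scanning": {"status": "enabled"},
            "secret_scanning_push_protection": {"status": "enabled"},
        }
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("security_and_analysis"))
```

Because these settings live on the repository object, an organisation-wide rollout would typically loop the same call over a list of repositories.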
“Historically, GitHub has taken an integrated approach to application security, embedding security attributes such as code scanning, Copilot Autofix, secret scanning, and dependency management within GitHub Advanced Security,” noted Katie Norton, research manager of DevSecOps and software supply chain security at IDC.
GitHub is also launching a free secret risk assessment for organisations starting April 1.
The assessment helps organisations understand their secret leak exposure across GitHub, giving administrators and developers the ability to check how sensitive data is exposed across their organisation, along with steps to secure their environments. It will be available at no additional cost for organisations on a GitHub Team or Enterprise plan.
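The assessment itself runs inside GitHub, but administrators who want a rough, self-serve view of current exposure can already pull organisation-wide secret scanning alerts from the REST API. The sketch below assumes a token permitted to read security events and uses the existing /orgs/{org}/secret-scanning/alerts endpoint; it is only illustrative and is not the new assessment feature.

```python
# Minimal sketch: list open secret scanning alerts across an organisation via
# GitHub's REST API. ORG and the token are placeholders; the endpoint paginates,
# so we follow the "next" link header until it runs out.
import requests

GITHUB_TOKEN = "ghp_example"   # placeholder token able to read security events
ORG = "my-org"                 # placeholder organisation

url = f"https://api.github.com/orgs/{ORG}/secret-scanning/alerts"
headers = {
    "Authorization": f"Bearer {GITHUB_TOKEN}",
    "Accept": "application/vnd.github+json",
}
params = {"state": "open", "per_page": 100}

alerts = []
while url:
    resp = requests.get(url, headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    alerts.extend(resp.json())
    url = resp.links.get("next", {}).get("url")  # follow pagination, if present
    params = None  # later page URLs already carry their query string

print(f"{len(alerts)} open secret scanning alerts in {ORG}")
for alert in alerts[:10]:
    print(alert.get("secret_type"), alert.get("repository", {}).get("full_name"))
```

Grouping the returned alerts by secret_type or repository gives a quick approximation of where the riskiest leaks sit before running the fuller assessment.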
Recently, the company introduced Agent Mode for GitHub Copilot, giving its AI-powered coding assistant the ability to iterate on its own code, recognise errors, and fix them automatically.
“Today, we are infusing the power of agentic AI into the GitHub Copilot experience, elevating Copilot from pair to peer programmer,” wrote GitHub CEO Thomas Dohmke on X while announcing the launch.
Trump axes AI staff and research funding, and scientists are worried

Ongoing Trump administration cuts to government agencies risk creating new collateral damage: the future of AI research.
On Monday, Bloomberg reported that the February layoffs at the National Science Foundation (NSF) of 170 people -- including several AI experts -- will inevitably throttle funding for AI research. Since 1950, the NSF has awarded grants that led to massive tech breakthroughs, including the algorithmic basis for Google and the building blocks for AI chatbots. The Foundation invests over $700 million annually in democratizing AI research and resources, with a focus on education, workforce development, and ethics.
The firings are expected to impact current research and budding AI talent in the US.
"Almost every employee with an advanced degree at every American AI firm has been a part of NSF-funded research at some point in their career," Gregory Allen, director of the Wadhwani AI Center, which focuses on national security, told Bloomberg. "Cutting those grants is robbing the future to pay the present."
The cuts leave fewer staff to award grants; Bloomberg noted that some review panels and project funding have already been halted. As with the impending layoffs at NIST and the AI Safety Institute, the firings affect teams created under the CHIPS and Science Act, which invested in domestic machine learning and manufacturing efforts.
Industry experts and former NSF employees told Bloomberg they found the move confusing given how it weakens US AI development -- despite how vocal the Trump administration has been about ramping up "America's global AI dominance." Rumors of massive budget cuts to NSF are also circulating.
That said, it's hard to tell how intentional or strategic the cuts to AI-specific staff are. As has been the case at many other government agencies, Elon Musk's Department of Government Efficiency (DOGE) targets probationary employees (who have fewer legal protections) and projects that it appears to misunderstand as DEI initiatives simply for using words like "diversity" in their program descriptions. An NSF staffer clarified to Bloomberg that, by "diversity of researchers," these projects refer to people from "different fields, states and disciplines."
Margaret Martonosi, who formerly led an NSF directorate, noted to Bloomberg that while academic institutions have other funding routes, that doesn't help "an aspiring AI expert in an arbitrary part of our country get the opportunities they need."
This morning, OpenAI introduced NextGenAI, a research consortium in partnership with 15 universities including Harvard, Duke, and the California State University system. In the release, the company promised $50 million in "research grants, compute funding, and API access to support students, educators, and researchers advancing the frontiers of knowledge."
Just last week, OpenAI and Anthropic partnered with the US National Labs to test the companies' latest models for scientific discovery. Together with the recent launch of ChatGPT Gov, OpenAI's chatbot for local, state, and federal agency use, and Project Stargate, a $500 billion data center investment plan, the Trump administration appears to be shrinking existing AI infrastructure within the government while investing in partnerships with private AI companies. That shift has already undermined government regulation and oversight, and could concentrate AI power too heavily in those companies over time.
OpenAI Introduces a $50M Consortium for AI Research and Education

OpenAI has introduced NextGenAI, a new consortium with 15 leading research institutions, to accelerate AI-driven breakthroughs in research and education.
“Excited for this-advancing research and education with AI,” wrote CEO Sam Altman on X. The firm is committing $50 million in funding, compute resources, and API access to support universities, libraries, and hospitals using AI.
The blog post also highlighted recent AI research breakthroughs at the partner universities.
The Ohio State University is advancing AI in digital health, manufacturing, energy, mobility, and agriculture. Harvard University and Boston Children’s Hospital are using AI for medical diagnosis of rare diseases and improving AI alignment in healthcare. Duke University is researching metascience to identify where AI can accelerate scientific progress. Texas A&M University is training students in responsible AI use. MIT is providing API access for students to train and fine-tune AI models. Howard University is integrating AI into curricula, teaching, and university operations. The University of Oxford is digitising rare texts in the Bodleian Library. The Boston Public Library is digitising public domain materials to improve accessibility.
NextGenAI reinforces OpenAI’s academic partnerships, following the launch of ChatGPT Edu in May 2024. The initiative will equip institutions with advanced AI tools and ensure the technology benefits students, researchers, and educators worldwide.
Market Impact Analysis
Market Growth Trend
| Year | 2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024 |
|---|---|---|---|---|---|---|---|
| Growth Rate | 23.1% | 27.8% | 29.2% | 32.4% | 34.2% | 35.2% | 35.6% |
Quarterly Growth Rate
| Quarter | Q1 2024 | Q2 2024 | Q3 2024 | Q4 2024 |
|---|---|---|---|---|
| Growth Rate | 32.5% | 34.8% | 36.2% | 35.6% |
Market Segments and Growth Drivers
| Segment | Market Share | Growth Rate |
|---|---|---|
| Machine Learning | 29% | 38.4% |
| Computer Vision | 18% | 35.7% |
| Natural Language Processing | 24% | 41.5% |
| Robotics | 15% | 22.3% |
| Other AI Technologies | 14% | 31.8% |
Competitive Landscape Analysis
| Company | Market Share |
|---|---|
| Google AI | 18.3% |
| Microsoft AI | 15.7% |
| IBM Watson | 11.2% |
| Amazon AI | 9.8% |
| OpenAI | 8.4% |
Future Outlook and Predictions
The AI and code security landscape is evolving rapidly, driven by technological advancements, changing threat vectors, and shifting business requirements. Based on current trends and expert analyses, we can anticipate several significant developments across different time horizons:
Technology Maturity Curve
Different technologies within the ecosystem are at varying stages of maturity, influencing adoption timelines and investment priorities:
Innovation Trigger
- Generative AI for specialized domains
- Blockchain for supply chain verification
Peak of Inflated Expectations
- Digital twins for business processes
- Quantum-resistant cryptography
Trough of Disillusionment
- Consumer AR/VR applications
- General-purpose blockchain
Slope of Enlightenment
- AI-driven analytics
- Edge computing
Plateau of Productivity
- Cloud infrastructure
- Mobile applications
Technology Evolution Timeline
- Improved generative models
- Specialized AI applications
- AI-human collaboration systems
- Multimodal AI platforms
- General AI capabilities
- AI-driven scientific breakthroughs
Expert Perspectives
Leading experts in the AI sector provide diverse perspectives on how the landscape will evolve over the coming years:
"The next frontier is AI systems that can reason across modalities and domains with minimal human guidance."
— AI Researcher
"Organizations that develop effective AI governance frameworks will gain competitive advantage."
— Industry Analyst
"The AI talent gap remains a critical barrier to implementation for most enterprises."
— Chief AI Officer
Areas of Expert Consensus
- Acceleration of Innovation: The pace of technological evolution will continue to increase
- Practical Integration: Focus will shift from proof-of-concept to operational deployment
- Human-Technology Partnership: Most effective implementations will optimize human-machine collaboration
- Regulatory Influence: Regulatory frameworks will increasingly shape technology development
Short-Term Outlook (1-2 Years)
In the immediate future, organizations will focus on implementing and optimizing currently available technologies to address pressing AI challenges:
- Improved generative models
- Specialized AI applications
- Enhanced AI ethics frameworks
These developments will be characterized by incremental improvements to existing frameworks rather than revolutionary changes, with emphasis on practical deployment and measurable outcomes.
Mid-Term Outlook (3-5 Years)
As technologies mature and organizations adapt, more substantial transformations will emerge in how security is approached and implemented:
- AI-human collaboration systems
- Multimodal AI platforms
- Democratized AI development
This period will see significant changes in security architecture and operational models, with increasing automation and integration between previously siloed security functions. Organizations will shift from reactive to proactive security postures.
Long-Term Outlook (5+ Years)
Looking further ahead, more fundamental shifts will reshape how cybersecurity is conceptualized and implemented across digital ecosystems:
- General AI capabilities
- AI-driven scientific breakthroughs
- New computing paradigms
These long-term developments will likely require significant technical breakthroughs, new regulatory frameworks, and evolution in how organizations approach security as a fundamental business function rather than a technical discipline.
Key Risk Factors and Uncertainties
Several critical factors, including regulatory developments, investment trends, technological breakthroughs, and market adoption, could significantly impact the trajectory of AI technology evolution.
Organizations should monitor these factors closely and develop contingency strategies to mitigate potential negative impacts on technology implementation timelines.
Alternative Future Scenarios
The evolution of technology can follow different paths depending on various factors including regulatory developments, investment trends, technological breakthroughs, and market adoption. We analyze three potential scenarios:
Optimistic Scenario
Responsible AI driving innovation while minimizing societal disruption
Key Drivers: Supportive regulatory environment, significant research breakthroughs, strong market incentives, and rapid user adoption.
Probability: 25-30%
Base Case Scenario
Incremental adoption with mixed societal impacts and ongoing ethical challenges
Key Drivers: Balanced regulatory approach, steady technological progress, and selective implementation based on clear ROI.
Probability: 50-60%
Conservative Scenario
Technical and ethical barriers creating significant implementation challenges
Key Drivers: Restrictive regulations, technical limitations, implementation challenges, and risk-averse organizational cultures.
Probability: 15-20%
Scenario Comparison Matrix
| Factor | Optimistic | Base Case | Conservative |
|---|---|---|---|
| Implementation Timeline | Accelerated | Steady | Delayed |
| Market Adoption | Widespread | Selective | Limited |
| Technology Evolution | Rapid | Progressive | Incremental |
| Regulatory Environment | Supportive | Balanced | Restrictive |
| Business Impact | Transformative | Significant | Modest |
Transformational Impact
Redefinition of knowledge work, automation of creative processes. This evolution will necessitate significant changes in organizational structures, talent development, and strategic planning processes.
The convergence of multiple technological trends—including artificial intelligence, quantum computing, and ubiquitous connectivity—will create both unprecedented security challenges and innovative defensive capabilities.
Implementation Challenges
Ethical concerns, computing resource limitations, talent shortages. Organizations will need to develop comprehensive change management strategies to successfully navigate these transitions.
Regulatory uncertainty, particularly around emerging technologies like AI in security applications, will require flexible security architectures that can adapt to evolving compliance requirements.
Key Innovations to Watch
Multimodal learning, resource-efficient AI, transparent decision systems. Organizations should monitor these developments closely to maintain competitive advantages and effective security postures.
Strategic investments in research partnerships, technology pilots, and talent development will position forward-thinking organizations to leverage these innovations early in their development cycle.
Technical Glossary
The following key terms and definitions provide context, for both technical and non-technical readers, on the technologies and security measures discussed in this article.