
Microsoft adds GPT-4 to its defensive suite in Security Copilot


The new AI security tool, which can answer questions about vulnerabilities and reverse-engineer problems, is now in preview.

Image: Adobe Stock/alvaher

AI hands are reaching further into the tech industry.

Microsoft has added Security Copilot, a natural language chatbot that can write and analyze code, to its suite of products enabled by OpenAI’s GPT-4 generative AI model. Security Copilot, which was announced on Wednesday, is now in preview for select customers. Microsoft will release more information through its email updates about when Security Copilot might become generally available.


What is Microsoft Security Copilot?

Microsoft Security Copilot is a natural language artificial intelligence assistant that appears as a prompt bar. This security tool will be able to:

  • Answer conversational questions such as “What are all the incidents in my enterprise?”
  • Write summaries.
  • Provide information about URLs or code snippets.
  • Cite the sources the AI pulled its information from.

The AI is built on OpenAI’s large language model plus a security-specific model from Microsoft. That proprietary model draws on established and ongoing global threat intelligence. Enterprises already using Azure’s hyperscale infrastructure will find the same security and privacy features attached to Security Copilot.
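Microsoft has not published details of how the two models work together. As a rough mental model, the pattern resembles grounding a general-purpose LLM’s answer in domain-specific threat intelligence; the Python sketch below is purely illustrative, and every function name in it is an assumption, not a Microsoft API.

```python
# Hypothetical sketch of the two-model pattern described above: a general
# LLM answer grounded by a security-specific threat-intelligence lookup.
# None of these functions correspond to a published Microsoft API.

def threat_intel_lookup(question: str) -> list[str]:
    """Stand-in for Microsoft's proprietary security model / intel feed."""
    return ["Indicator X observed in campaign Y (global telemetry)"]

def general_llm(prompt: str) -> str:
    """Stand-in for the GPT-4 large language model."""
    return f"[LLM answer conditioned on]\n{prompt}"

def security_copilot(question: str) -> str:
    """Ground the general model's answer in threat-intel context."""
    context = "\n".join(threat_intel_lookup(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return general_llm(prompt)

print(security_copilot("What are all the incidents in my enterprise?"))
```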

SEE: Microsoft launches general availability of Azure OpenAI service (TechRepublic)

How does Security Copilot help IT detect, analyze and mitigate threats?

Microsoft positions Security Copilot as a way for IT departments to handle staff shortages and skills gaps. The cybersecurity field is “critically in need of more professionals,” said the International Information System Security Certification Consortium (ISC)². The worldwide gap between cybersecurity jobs and workers is 3.4 million, the consortium’s 2022 Workforce Study found.

Due to this skills gap, organizations may look for ways to assist employees who are newer or less familiar with specific tasks. Security Copilot automates some of those tasks, so security personnel can type in prompts like “look for presence of compromise” to make threat hunting easier. Users can save prompts and share prompt books with other members of their team; these prompt books record what they asked the AI and how it replied.
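Microsoft has not published a schema for prompt books, but conceptually one is a named, shareable log of prompts and the AI’s replies. Below is a minimal Python sketch of that idea; all class, field and function names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of a "prompt book": a named, shareable record of the
# prompts a user ran and the answers the AI returned. Microsoft has not
# published Security Copilot's actual schema; this is illustrative only.

@dataclass
class PromptEntry:
    prompt: str    # e.g. "look for presence of compromise"
    response: str  # the AI's reply, stored verbatim for later review

@dataclass
class PromptBook:
    name: str
    author: str
    entries: List[PromptEntry] = field(default_factory=list)

    def record(self, prompt: str, response: str) -> None:
        """Append one prompt/response pair to the book."""
        self.entries.append(PromptEntry(prompt, response))

    def share(self) -> dict:
        """Serialize the book so a teammate can replay the same workflow."""
        return {
            "name": self.name,
            "author": self.author,
            "entries": [vars(e) for e in self.entries],
        }

# Example: an analyst saves a threat-hunting session for the team.
book = PromptBook(name="Initial triage", author="analyst@example.com")
book.record("look for presence of compromise", "<AI summary of findings>")
print(book.share())
```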

Security Copilot can summarize an event, incident or threat and create a shareable report. It can also reverse-engineer a malicious script, explaining what the script does.

SEE: Microsoft adds Copilot AI productivity bot to 365 suite (TechRepublic)

Copilot integrates with several existing Microsoft security offerings. Microsoft Sentinel (a security information and event management tool), Defender (extended detection and response) and Intune (endpoint management and threat mitigation) can all communicate with and feed information into Security Copilot.

Microsoft reassures users that their data and prompts remain secure within each organization. The tech giant also creates transparent audit trails within the AI, so developers can see what questions were asked and how Copilot answered them. Security Copilot data is never fed back into Microsoft’s big data lakes to train other AI models, reducing the chance that confidential information from one company will end up as an answer to a question within a different company.

Is cybersecurity run by AI safe?

While natural language AI can fill in gaps for overworked or undertrained personnel, managers and department heads should have a framework in place to keep human eyes on the work before code goes live, since AI can still return false or misleading results. (Microsoft has options for reporting when Security Copilot makes mistakes.)

Soo Choi-Andrews, cofounder and chief executive officer of the security company Mondoo, raised the following concerns that cybersecurity decision-makers should weigh before assigning their teams to use AI.

“Security teams should approach AI tools with the same rigor as they would when evaluating any new product,” Choi-Andrews said in an interview by email. “It’s essential to understand the limitations of AI, as most tools are still based on probabilistic algorithms that may not always produce accurate results … When considering AI implementation, CISOs should ask themselves whether the technology helps the business unlock revenue faster while also protecting assets and fulfilling compliance obligations.”

“As for how much AI should be used, the landscape is rapidly evolving, and there isn’t a one-size-fits-all answer,” Choi-Andrews said.

SEE: As a cybersecurity blade, ChatGPT can cut both ways (TechRepublic)

OpenAI suffered a data breach on March 20, 2023. “We took ChatGPT offline earlier this week due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history,” OpenAI wrote in a blog post on March 24, 2023. The open-source Redis client library at fault, redis-py, has since been patched.
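Per OpenAI’s account, the failure mode reportedly involved a cancelled request leaving its unread response on a pooled connection, so the next request on that connection could receive someone else’s data. The simplified Python model below illustrates that class of bug; it is not redis-py’s actual code.

```python
# Illustration only: a simplified model of the failure mode behind the
# redis-py bug, not the library's actual code. If a cancelled request
# leaves its unread response on a pooled connection, the next caller who
# checks out that connection can receive the previous user's data.

from collections import deque

class PooledConnection:
    def __init__(self):
        self.responses = deque()  # replies queued on the wire, in order

    def send(self, command: str) -> None:
        # Pretend the server immediately answers every command.
        self.responses.append(f"response to {command!r}")

    def read(self) -> str:
        return self.responses.popleft()

pool = [PooledConnection()]

# User A sends a request but is cancelled before reading the reply, and
# the connection is returned to the pool without being drained.
conn = pool.pop()
conn.send("GET chat_history:user_a")
pool.append(conn)  # BUG: one unread response is still queued

# User B checks out the same connection and issues their own request.
conn = pool.pop()
conn.send("GET chat_history:user_b")
print(conn.read())  # prints user A's response: cross-user data leak
```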

As of this writing, more than 1,700 people, including Elon Musk and Steve Wozniak, have signed a petition calling for AI companies like OpenAI to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4” in order to “jointly develop and implement a set of shared safety protocols.” The petition was started by the Future of Life Institute, a nonprofit dedicated to using AI for good and reducing its potential for “large-scale risks” such as “militarized AI.”

Both attackers and defenders use OpenAI products

Google, Microsoft’s main rival in the race to find the most lucrative use for natural language AI, has not yet announced a dedicated AI product for enterprise security. Microsoft announced in January 2023 that its cybersecurity arm is now a $20 billion business.

A few other security-focused companies have already added OpenAI’s conversational models to their products. ARMO, which makes the Kubescape security platform for Kubernetes, added ChatGPT to its custom controls feature in February. Orca Security added OpenAI’s GPT-3, at the time the most up-to-date model, to its cloud security platform in January to generate remediation instructions for customers. Skyhawk Security added the model to its cloud threat detection and response products as well.

Another loud signal here may be aimed at those on the black hat side of the cybersecurity line. Hackers and giant corporations will continue to jostle over building the most defensible digital walls and finding ways to breach them.

“It’s important to note that AI is a double-edged sword: while it can benefit security measures, attackers are also leveraging it for their purposes,” Choi-Andrews said.
