Microsoft Launches Bug Bounty Program for Bing AI Services

Microsoft has launched a new bug bounty program for security researchers who find vulnerabilities in its Bing AI services and integrations. The program offers rewards ranging from $2,000 to $15,000 for qualified submissions.

What Are the Bing AI Services?

The Bing AI services are the new AI-powered features that enhance the Bing search experience. These features include:

  • Bing Chat: A conversational interface that allows users to interact with Bing using natural language queries and commands.
  • Bing Chat for Enterprise: A customized version of Bing Chat that integrates with Microsoft 365 and other enterprise applications.
  • Bing Image Creator: A tool that allows users to create and edit images using natural language instructions and AI-powered effects.
  • Bing integration in Microsoft Edge: A seamless integration that enables users to access Bing Chat and other Bing features from the Microsoft Edge browser.
  • Bing integration in Microsoft Start: A personalized news and information feed that leverages Bing’s AI capabilities to deliver relevant content and recommendations.
  • Bing integration in Skype: A feature that allows users to chat with Bing and get instant answers, information, and assistance from within the Skype mobile app.

Why Did Microsoft Launch the Bug Bounty Program?

Microsoft launched the bug bounty program to encourage security researchers from across the globe to discover and report vulnerabilities in the Bing AI services. The program aims to enhance the security and reliability of the AI-powered Bing experience and protect the privacy and data of its customers.

According to Lynn Miyashita, a technical program manager at the Microsoft Security Response Center (MSRC), “Partnering with security researchers through our bug bounty programs is an essential part of Microsoft’s holistic strategy to protect customers from security threats. We value our partnership with the global security research community and are excited to expand our scope to include the AI-powered Bing experience.”

How Can Security Researchers Participate in the Bug Bounty Program?

Security researchers who want to participate in the bug bounty program can submit their findings through the MSRC Researcher Portal, selecting “Bing” in the “Products” section of the vulnerability submission and including the conversation ID in the “Details to reproduce” section. The conversation ID can be retrieved by entering “/id” as a chat command in Bing Chat.

The vulnerability submissions must meet the following criteria to be eligible for bounty awards:

  • Identify a vulnerability in the AI-powered Bing that was not previously reported to, or otherwise known by, Microsoft.
  • The vulnerability must be of Critical or Important severity, as defined by the Microsoft Vulnerability Severity Classification for AI Systems, and must be reproducible on the latest, fully patched version of the product or service.
  • Include clear, concise, and reproducible steps, either in writing or in video format.
  • Provide Microsoft engineers with the information necessary to quickly reproduce, understand, and fix the issue.

The bounty rewards are based on the severity and impact of the vulnerability, as well as the quality of the submission, and range from $2,000 to $15,000 USD. Microsoft may, at its sole discretion, reject any submission it determines does not meet the above criteria.

Security researchers are also advised to create test accounts and test tenants for security testing and probing, and to follow the Research Rules of Engagement to avoid harm to customer data, privacy, and service availability. If in doubt, they can contact the Microsoft bounty team for clarification.


What Are Some Examples of Vulnerabilities in AI Systems?

Vulnerabilities in AI systems are flaws or weaknesses that can compromise the security, privacy, or functionality of an AI system or its users. Some examples of vulnerabilities in AI systems are:

  • Data poisoning: An attack that manipulates or corrupts the training data or feedback loop of an AI system to degrade its performance or cause it to produce malicious outputs (see the data-poisoning sketch after this list).
  • Model stealing: An attack that extracts or copies the parameters or architecture of an AI model without authorization or consent.
  • Model inversion: An attack that infers sensitive information about the training data or individual inputs from an AI model’s outputs or queries.
  • Adversarial examples: An attack that crafts malicious inputs designed to fool an AI model into making incorrect predictions or classifications (see the FGSM sketch after this list).
  • Backdoor attacks: An attack that implants a hidden trigger or functionality in an AI model that can be activated by a specific input or condition.
  • Denial-of-service attacks: An attack that prevents an AI system from functioning properly by overwhelming it with requests or inputs.
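
To make the data-poisoning class concrete, here is a minimal sketch of a label-flipping attack, assuming scikit-learn is available. The synthetic dataset and logistic-regression model are illustrative placeholders, not tied to Bing or any Microsoft service; the point is simply that corrupting a fraction of training labels measurably degrades the resulting model.

```python
# Minimal sketch of label-flipping data poisoning (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data as a stand-in training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Poison the training set by flipping 30% of the labels.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]

# The same model trained on poisoned labels scores noticeably worse.
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```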
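
And here is a minimal sketch of an adversarial example generated with the Fast Gradient Sign Method (FGSM), assuming PyTorch. The tiny untrained network and random input are placeholders for illustration; a real attack would target a trained model, and with this toy setup the prediction may or may not flip.

```python
# Minimal FGSM sketch: nudge an input in the direction that increases the loss.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: 10 input features -> 3 classes.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

def fgsm(model, x, label, epsilon):
    """Return x perturbed by epsilon along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

x = torch.randn(1, 10)           # benign input
label = model(x).argmax(dim=1)   # use the model's own prediction as the label

x_adv = fgsm(model, x, label, epsilon=0.5)
print("prediction before:", label.item())
print("prediction after: ", model(x_adv).argmax(dim=1).item())
```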

What Are Some Benefits of Securing AI Systems?

Securing AI systems is important for ensuring the trustworthiness and reliability of AI applications and services. Some benefits of securing AI systems are:

  • Protecting customer data and privacy: Securing AI systems can prevent unauthorized access, leakage, or misuse of customer data and personal information that are processed by AI systems.
  • Enhancing customer experience and satisfaction: Securing AI systems can improve the quality and accuracy of AI outputs and recommendations, as well as reduce errors and failures that can frustrate or harm customers.
  • Increasing customer loyalty and retention: Securing AI systems can increase customer confidence and trust in AI services and products, as well as foster long-term relationships and loyalty.
  • Reducing legal and reputational risks: Securing AI systems can prevent or mitigate potential lawsuits, fines, or sanctions that can result from security breaches or incidents involving AI systems.
  • Promoting ethical and responsible AI: Securing AI systems can align with the principles and values of ethical and responsible AI, such as fairness, transparency, accountability, and safety.
