📚 Build AI you can trust with responsible ML


💡 News category: Programming
🔗 Source: azure.microsoft.com

As AI reaches critical momentum across industries and applications, it becomes essential to ensure the safe and responsible use of AI. AI deployments are increasingly impacted by the lack of customer trust in the transparency, accountability, and fairness of these solutions. Microsoft is committed to the advancement of AI and machine learning (ML), driven by principles that put people first, and tools to enable this in practice.

In collaboration with the Aether Committee and its working groups, we are bringing the latest research in responsible AI to Azure. Let's look at how the new responsible ML capabilities in Azure Machine Learning and our open-source toolkits empower data scientists and developers to understand ML models, protect people and their data, and control the end-to-end ML process.

Responsible ML capabilities in Azure Machine Learning help developers and data scientists to understand (with interpretability and fairness), protect (with differential privacy and confidential ML), and control (with audit trails and datasheets) the end-to-end ML process.

Understand

As ML becomes deeply integrated into our daily business processes, transparency is critical. Azure Machine Learning helps you to not only understand model behavior but also assess and mitigate unfairness.

Interpret and explain model behavior

Model interpretability capabilities in Azure Machine Learning, powered by the InterpretML toolkit, enable developers and data scientists to understand model behavior and provide model explanations to business stakeholders and customers.

Use model interpretability to:

  • Build accurate ML models.
  • Understand the behavior of a wide variety of models, including deep neural networks, during both training and inferencing phases.
  • Perform what-if analysis to determine the impact on model predictions when feature values are changed.
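The last bullet can be sketched without any particular toolkit: a what-if analysis simply re-scores an input with one feature value changed and compares the predictions. The scoring function, feature names, and weights below are invented for illustration and stand in for any trained model:

```python
# Minimal what-if analysis sketch: score an input, change one feature,
# and compare predictions. The toy linear "model" and its weights are
# hypothetical; a real setup would call a trained model's predict method.

def score(features):
    # Weighted sum squashed into [0, 1] around a 0.5 baseline.
    weights = {"income": 0.00001, "age": 0.01, "num_defaults": -0.5}
    z = sum(weights[k] * v for k, v in features.items())
    return max(0.0, min(1.0, 0.5 + z))

def what_if(features, feature, new_value):
    """Return (baseline, changed, delta) for a single-feature change."""
    baseline = score(features)
    changed = score({**features, feature: new_value})
    return baseline, changed, changed - baseline

applicant = {"income": 40000, "age": 35, "num_defaults": 2}
base, new, delta = what_if(applicant, "num_defaults", 0)
print(f"baseline={base:.3f} changed={new:.3f} delta={delta:+.3f}")
```

Sweeping `new_value` over a range of values for one feature, while holding the others fixed, produces the kind of what-if curve interactive interpretability dashboards display.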

"Azure Machine Learning helps us build AI responsibly and build trust with our customers. Using the interpretability capabilities in the fraud detection efforts for our loyalty program, we are able to understand models better, identify genuine cases of fraud, and reduce the possibility of erroneous results."
—Daniel Engberg, Head of Data Analytics and Artificial Intelligence, Scandinavian Airlines

Assess and mitigate model unfairness

A challenge with building AI systems today is that fairness is hard to assess and prioritize. Using Fairlearn with Azure Machine Learning, developers and data scientists can apply specialized algorithms to ensure fairer outcomes for everyone.

Use fairness capabilities to:

  • Assess model fairness during both model training and deployment.
  • Mitigate unfairness while optimizing model performance.
  • Use interactive visualizations to compare a set of recommended models that mitigate unfairness.
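To make the first bullet concrete, one metric such an assessment computes is the demographic parity difference: the gap in positive-prediction (selection) rates across sensitive groups. Fairlearn reports this as a built-in metric; here it is recomputed by hand on made-up predictions and group labels:

```python
# Sketch of a fairness assessment metric. The predictions and group
# labels are illustrative only; in practice they would come from a
# trained model and a sensitive-attribute column.

def selection_rates(predictions, groups):
    """Positive-prediction rate per sensitive-group value."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return rates

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest group selection rates (0 = parity)."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(selection_rates(y_pred, group))
print(demographic_parity_difference(y_pred, group))
```

A difference of 0 means both groups are selected at the same rate; mitigation algorithms search for models that shrink this gap while preserving accuracy.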

"Azure Machine Learning and its Fairlearn capabilities offer advanced fairness and explainability that have helped us deploy trustworthy AI solutions for our customers, while enabling stakeholder confidence and regulatory compliance."
—Alex Mohelsky, EY Canada Partner and Advisory Data, Analytic and AI Leader

Protect

ML is increasingly used in scenarios that involve sensitive information like medical patient or census data. Current practices, such as redacting or masking data, can be limiting for ML. To address this issue, differential privacy and confidential machine learning techniques can be used to help organizations build solutions while maintaining data privacy and confidentiality.

Prevent data exposure with differential privacy

Using the new WhiteNoise differential privacy toolkit with Azure Machine Learning, data science teams can build ML solutions that preserve privacy and help prevent reidentification of an individualโ€™s data. These differential privacy techniques have been developed in collaboration with researchers at Harvardโ€™s Institute for Quantitative Social Science (IQSS) and School of Engineering.

Differential privacy protects sensitive data by:

  • Injecting statistical noise into data to help prevent the disclosure of private information, without significant loss of accuracy.
  • Managing exposure risk by tracking the privacy budget spent by individual queries and limiting further queries as appropriate.
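Both bullets can be illustrated in a few lines: the Laplace mechanism adds noise calibrated to a query's sensitivity, and a budget tracker refuses queries once the total epsilon is spent. This is a sketch of the idea only, not the WhiteNoise toolkit's API:

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

class PrivateCounter:
    """Noisy counting queries under a fixed total privacy budget."""

    def __init__(self, total_epsilon):
        self.remaining = total_epsilon  # privacy budget left to spend

    def count(self, values, predicate, epsilon):
        """Noisy count (sensitivity 1), spending `epsilon` of the budget."""
        if epsilon > self.remaining:
            raise RuntimeError("privacy budget exhausted")
        self.remaining -= epsilon
        true_count = sum(1 for v in values if predicate(v))
        # Smaller epsilon -> larger noise scale -> stronger privacy.
        return true_count + laplace_noise(scale=1.0 / epsilon)

ages = [23, 41, 35, 62, 29]
counter = PrivateCounter(total_epsilon=1.0)
noisy = counter.count(ages, lambda a: a >= 30, epsilon=0.5)
print(f"noisy count: {noisy:.2f}, budget left: {counter.remaining}")
```

The key property is that once `remaining` hits zero, no further queries are answered, which bounds how much any analyst can learn about a single individual across all their queries combined.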

Safeguard data with confidential machine learning

In addition to data privacy, organizations are looking to ensure security and confidentiality of all ML assets.

To enable secure model training and deployment, Azure Machine Learning provides a strong set of data and networking protection capabilities. These include support for Azure Virtual Networks, private links to connect to ML workspaces, dedicated compute hosts, and customer managed keys for encryption in transit and at rest.

Building on this secure foundation, Azure Machine Learning also enables data science teams at Microsoft to build models over confidential data in a secure environment, without being able to see the data. All ML assets are kept confidential during this process. This approach is fully compatible with open source ML frameworks and a wide range of hardware options. We are excited to bring these confidential machine learning capabilities to all developers and data scientists later this year.

Control

To build responsibly, the ML development process should be repeatable, reliable, and accountable to stakeholders. Azure Machine Learning helps decision makers, auditors, and everyone involved in the ML lifecycle support a responsible process.

Track ML assets using audit trail

Azure Machine Learning provides capabilities to automatically track lineage and maintain an audit trail of ML assets. Details such as run history, training environment, and data and model explanations are all captured in a central registry, allowing organizations to meet a variety of audit requirements.
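The shape of such an audit trail can be illustrated with a toy registry. Azure Machine Learning records this automatically; the record structure below is hypothetical, just to show the kind of lineage details (environment, data fingerprint, metrics) that get captured per run:

```python
import datetime
import hashlib
import json

class RunRegistry:
    """Toy append-only registry of training runs for audit purposes."""

    def __init__(self):
        self._runs = []

    def log_run(self, model_name, environment, training_data, metrics):
        record = {
            "run_id": len(self._runs) + 1,
            "model": model_name,
            "environment": environment,
            # Fingerprint the data so auditors can verify what was used.
            "data_sha256": hashlib.sha256(
                json.dumps(training_data, sort_keys=True).encode()
            ).hexdigest(),
            "metrics": metrics,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        self._runs.append(record)
        return record["run_id"]

    def history(self, model_name):
        """All recorded runs for one model, oldest first."""
        return [r for r in self._runs if r["model"] == model_name]
```

Because the data fingerprint is deterministic, two runs trained on identical data produce identical hashes, letting an auditor confirm whether a retrained model actually saw different data.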

Increase accountability with model datasheets

Datasheets provide a standardized way to document ML information such as motivations, intended uses, and more. At Microsoft, we led research on datasheets to provide transparency to data scientists, auditors, and decision makers. We are also working with the Partnership on AI and leaders across industry, academia, and government to develop recommended practices and a process called ABOUT ML. The custom tags capability in Azure Machine Learning can be used to implement datasheets today, and over time we will release additional features.
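A datasheet expressed as flat key/value tags, in the spirit of the custom-tags approach described above, might look like the sketch below. The tag keys and fields are illustrative, not a prescribed schema:

```python
# Sketch of a model datasheet as key/value tags. Field names follow
# the spirit of datasheet research (motivation, intended use, data
# provenance); the exact keys here are hypothetical.

def make_datasheet_tags(motivation, intended_uses, out_of_scope_uses, training_data):
    tags = {
        "datasheet.motivation": motivation,
        "datasheet.intended_uses": "; ".join(intended_uses),
        "datasheet.out_of_scope_uses": "; ".join(out_of_scope_uses),
        "datasheet.training_data": training_data,
    }
    # Tags are flat strings, so validate before attaching them to a model.
    for key, value in tags.items():
        if not isinstance(value, str) or not value:
            raise ValueError(f"tag {key!r} must be a non-empty string")
    return tags

tags = make_datasheet_tags(
    motivation="Flag likely fraudulent loyalty-program transactions",
    intended_uses=["Ranking cases for human review"],
    out_of_scope_uses=["Automatic account suspension"],
    training_data="2019-2020 anonymized transaction logs",
)
```

Keeping the validation next to tag creation means an incomplete datasheet fails loudly at registration time rather than surfacing as a silent documentation gap during an audit.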

Start innovating responsibly

In addition to the new capabilities in Azure Machine Learning and our open-source tools, we have also developed principles for the responsible use of AI. The new responsible ML innovations and resources are designed to help developers and data scientists build more reliable, fair, and trustworthy ML. Join us today and begin your journey with responsible ML!

Additional resources

...


