Harnessing the potential of AI can contribute significantly to European growth while upholding European values. It is equally crucial, however, to address the challenges and risks AI may pose and to ensure they are managed effectively.
A key lesson from social media's impact is that while it can promote democracy, it can also be misused, as seen in its dual role during the Arab Spring and its subsequent weaponization. With AI, we need to address potential issues proactively and ensure proper oversight alongside pursuing its benefits. At Microsoft, our six ethical principles for AI, adopted in 2018, place accountability at the foundation. This principle ensures that AI remains under human control and subject to effective oversight. In democratic societies, no individual, government, or company is above the law, and AI technologies must adhere to the same principle. In May, Microsoft released a whitepaper, "Governing AI: A Blueprint for the Future," outlining a five-point plan for AI governance. The plan draws on years of experience and focuses on Europe's leadership in AI regulation.

Implementing Government-led AI Safety Frameworks
A critical first step is building on existing government frameworks to advance AI safety. The EU's AI Act and related frameworks such as the NIST AI Risk Management Framework and ISO/IEC 42001 are pivotal. Microsoft is committed to these frameworks and encourages international alignment.

Effective Safety Measures for AI in Critical Infrastructure
Debates around AI control of critical infrastructure are essential. Our blueprint proposes safety brakes for AI systems that manage infrastructure such as electrical grids and traffic flows. These systems should have built-in safety measures and be tested regularly to ensure human oversight and robustness (a minimal illustrative sketch of this pattern appears at the end of this article).

Developing a Legal and Regulatory Framework for AI
It is vital to align legal and regulatory responsibilities with AI's technology architecture. The EU's risk-based approach in the AI Act is a significant step, emphasizing responsible design, development, and post-market monitoring.

Promoting Transparency and Access to AI
Transparency in AI systems and broad access to AI resources are crucial. Initiatives such as the Coalition for Content Provenance and Authenticity (C2PA) help enhance trust and transparency. Microsoft is committed to transparency through annual AI reports and tools for identifying AI-generated content.

Public-Private Partnerships for Societal Challenges
AI's impact on society necessitates collaboration between the public and private sectors. Defensive AI technologies that protect democracy and fundamental rights, promote inclusive growth, and advance sustainability are essential. Microsoft is dedicated to these areas through concrete initiatives and partnerships.

International AI Governance
Europe's AI regulation offers a framework grounded in the rule of law, but multilateral partnerships are needed for AI governance to have global impact. The EU, US, G7, and other nations can collaborate on shared principles and voluntary standards for AI governance, promoting innovation and compliance across borders.

In summary, advancing AI governance requires proactive measures, international collaboration, and adherence to ethical principles. By working together, we can ensure AI serves as a positive force for society.
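The safety-brake concept referenced in the critical infrastructure section above is an architectural pattern rather than a specific product feature. As a hedged illustration only, the sketch below shows one way such a brake could be layered around an AI controller in Python: proposed actions are validated against hard, human-set operating limits, and anything outside them is rejected and escalated to a human operator. All names and numbers here (GridSafetyBrake, the megawatt bounds) are hypothetical and not drawn from any Microsoft implementation.

```python
# Illustrative sketch only: a "safety brake" wrapper around an AI controller.
# Hypothetical names and limits; not a real system's implementation.
from dataclasses import dataclass


@dataclass
class BrakeDecision:
    approved: bool
    value_mw: float
    reason: str


class GridSafetyBrake:
    """Checks AI-proposed setpoints against hard operational limits."""

    def __init__(self, min_load_mw: float, max_load_mw: float):
        self.min_load_mw = min_load_mw
        self.max_load_mw = max_load_mw

    def review(self, proposed_mw: float, last_safe_mw: float) -> BrakeDecision:
        # Accept values inside the engineered safe envelope; otherwise fall
        # back to the last known-safe value and escalate to a human operator.
        if self.min_load_mw <= proposed_mw <= self.max_load_mw:
            return BrakeDecision(True, proposed_mw, "within safe envelope")
        return BrakeDecision(False, last_safe_mw,
                             "out of bounds: escalate to human operator")


if __name__ == "__main__":
    brake = GridSafetyBrake(min_load_mw=100.0, max_load_mw=500.0)
    decision = brake.review(proposed_mw=750.0, last_safe_mw=320.0)
    print(decision)  # approved=False, falls back to 320.0 MW pending review
```

The design choice illustrated here is simply that the brake sits outside the AI model, so its limits and fallback behavior remain auditable and under human control regardless of how the model behaves.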
Introduction to the Frontier Model Forum
Microsoft, Anthropic, Google, and OpenAI are proud to introduce the Frontier Model Forum. This new industry consortium is dedicated to the safe and ethical advancement of frontier AI technologies. By collectively endorsing guidelines initiated by President Biden and undertaking independent measures, these tech leaders are strengthening their commitment to responsible AI evolution.
Objectives and Vision of the Forum
The Forum is set to (i) propel AI safety research to ensure responsible development and minimize risks, (ii) pinpoint best safety practices for advanced models, (iii) facilitate knowledge exchange with key stakeholders to foster responsible AI progress, and (iv) bolster AI applications aimed at addressing major societal challenges. An Advisory Board will be set up to steer the Forum's strategic direction.

Formation and Goals of the Frontier Model Forum
Formally launched on July 26, 2023, by Anthropic, Google, Microsoft, and OpenAI, the Frontier Model Forum aims to draw on the expertise of its members to benefit the broader AI ecosystem. This includes advancing technical assessments, establishing benchmarks, and creating a public repository of resources to uphold industry best practices and standards.

Core Objectives and Collaborative Efforts
The Forum is focused on advancing AI safety research, identifying best practices for the deployment of advanced models, and collaborating with diverse sectors to share insights on AI risks and trust. In addition, the Forum is committed to developing AI solutions that tackle global challenges such as climate change, health crises, and cybersecurity threats.

Membership and Inclusion Criteria
Membership in the Frontier Model Forum is open to organizations that develop cutting-edge AI models and show a deep commitment to safety. Members are expected to participate actively in joint initiatives and to support the overarching goals of the Forum.

Strategic Actions and Future Plans
The Frontier Model Forum will prioritize identifying best practices, advancing AI safety research, and facilitating effective information sharing among stakeholders. These efforts aim to establish a framework for safely developing and deploying AI technologies.

Leadership Statements on AI Safety and Ethics
Kent Walker of Google and Brad Smith of Microsoft highlight the importance of collaborative innovation and responsible AI development. Anna Makanju of OpenAI and Dario Amodei of Anthropic stress the need for effective governance and safety practices to maximize AI's societal benefits.

Operational Framework and Institutional Support
In the coming months, the Frontier Model Forum will establish an Advisory Board and formalize its operational structure, including governance and funding mechanisms. The Forum aims to complement and enhance existing initiatives by bodies such as the G7, OECD, and various industry groups.

Microsoft Endorses White House's AI Guidelines, Unveils Additional Pledges
Today, Microsoft is affirming its support for new voluntary guidelines put forth by the Biden-Harris administration aimed at ensuring advanced AI systems are secure, reliable, and trustworthy. Microsoft is not only endorsing the guidelines outlined by President Biden but also taking independent steps that bolster these essential objectives, broadening its commitment to responsible AI deployment along with other leaders in the sector. The swift actions proposed by the White House lay the groundwork to ensure AI's potential benefits outweigh its risks.
We are encouraged by the President's initiative in rallying the tech community to define practical measures that will enhance AI's safety, security, and utility for everyone. Rooted in the core values of safety, security, and trust, these voluntary guidelines tackle the challenges posed by sophisticated AI technologies and encourage the adoption of specific strategies, such as red-team testing and the release of transparency reports. These efforts are designed to advance the entire field and build on significant existing U.S. initiatives, including:
The Call for Responsible AI: Industry Collaboration and Safety Measures
Microsoft's additional pledges are aimed at enhancing the ecosystem and actualizing the principles of safety, security, and trust. From supporting trials of the National AI Research Resource to promoting the creation of a national registry for high-risk AI systems, we believe these actions will promote greater transparency and accountability. We are also committed to widespread application of the NIST AI Risk Management Framework and to adopting cybersecurity measures specifically designed to address unique AI threats, which we expect will lead to more dependable AI systems that benefit our customers and society at large. Details on Microsoft's commitments can be found here.

Building Trustworthy AI: Specific Strategies and Existing Initiatives
Developing commitments of this nature and implementing them at Microsoft requires a collective effort.

Microsoft Backs Biden-Harris Administration's Voluntary AI Guidelines
I extend my gratitude to Kevin Scott, Microsoft's Chief Technology Officer, with whom I co-lead our responsible AI initiative, and to Natasha Crampton, Sarah Bird, Eric Horvitz, Hanna Wallach, and Ece Kamar for their pivotal roles in our responsible AI framework. As emphasized by the White House's voluntary guidelines, it is crucial to keep human interests at the forefront of AI development.
Last week, I had the enriching opportunity to participate in the Responsible AI Leadership: Global Summit on Generative AI, a joint effort by the World Economic Forum and AI Commons. The summit was a vibrant platform for global leaders to unite, share insights, and forge paths towards responsible AI adoption.
During this summit and similar recent events, the recurring themes have been the importance of collaborative learning and mutual exchange of insights. I often field questions such as, “How does Microsoft implement responsible AI?” and “Are you prepared to meet the demands of this moment?” Here’s how we address these queries. At Microsoft, our approach to responsible AI involves a comprehensive framework that spans practice and culture. We operationalize these principles through rigorous governance, policy enforcement, and supportive training, while cultivating a culture where every employee is an advocate for responsible AI. In navigating the responsible AI landscape, I emphasize three critical elements:
Leadership Commitment and Involvement:
Our journey in responsible AI is ongoing and humbling, requiring us to continually learn and adapt. The Responsible AI Leadership Summit reinforced the value of collective effort in this domain. We remain dedicated to transparency and to sharing our learnings through various resources, including our Responsible AI Standard and customer transparency documents. As we look to the future, the potential of AI is vast and demands sustained collaboration across all sectors to ensure it serves humanity responsibly and effectively.

On December 11, 2023, in Washington, D.C., the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), together with Microsoft Corporation, announced a pioneering initiative. This collaboration aims to establish a comprehensive dialogue on the role of artificial intelligence (AI) in the workplace, ensuring that the development and application of AI technologies consider the perspectives and needs of workers. The initiative marks the first collaboration between a labor union and a technology firm focused specifically on AI.
Goals of the Partnership
This partnership is committed to three primary objectives: (1) educating labor leaders and workers about emerging AI technologies, (2) integrating workers’ insights into AI development, and (3) influencing public policy to support the technological skills and needs of workers at the forefront of industry changes. Building on a previous neutrality agreement facilitated by the Communications Workers of America with Microsoft for video game workers, this new partnership also establishes a framework for future labor organizing within AFL-CIO affiliated unions. This agreement underscores a mutual commitment to foster positive labor-management relations and to support workforce adaptations amid rapid technological advancements.

Comments from Leaders
AFL-CIO President Liz Shuler noted, “This collaboration recognizes the essential role that workers play in shaping, deploying, and regulating AI and related technologies. We are eager to work alongside Microsoft to enhance the influence of workers in crafting technologies that are centered around their needs and well-being.” Microsoft’s Vice Chair and President, Brad Smith, stated, “This partnership allows us to make AI technology work in favor of America’s workforce, by integrating insights from labor leaders directly into our development process, and equipping people with the skills needed for the AI-driven future.”

Strategic Initiatives
Both organizations recognize that AI has the potential to significantly enhance job roles if used to augment rather than replace human efforts. Recent surveys indicate mixed sentiment among workers regarding AI, with many expressing concerns about job security while others see potential for AI to reduce workloads.

About AFL-CIO and Microsoft
The AFL-CIO is a democratic federation that represents 12.5 million workers across various sectors, advocating for better working conditions and rights. Microsoft, a global leader in digital transformation and cloud services, is also deeply involved in the development of Microsoft Dynamics solutions, including extending its services to meet specific market needs in regions like Saudi Arabia.

Microsoft Dynamics Solutions in Saudi Arabia: Tailoring Technology to Local Business Needs
As part of Microsoft's global strategy to empower every organization on the planet to achieve more, the company has been actively expanding its reach with tailored solutions that meet the unique market demands of various regions. In Saudi Arabia, Microsoft Dynamics solutions are at the forefront of this effort, providing robust, scalable tools designed to enhance business operations across multiple sectors. In the Kingdom, where economic diversification and technological advancement are national priorities, Microsoft Dynamics offers a suite of integrated, data-driven applications that help businesses automate processes and make informed decisions based on real-time data insights. From finance and operations to customer relationship management, these solutions are specifically adapted to comply with local regulations and cultural norms.

Moreover, Microsoft's commitment to the Saudi market extends beyond providing software. It involves a comprehensive ecosystem of support, training, and development that helps organizations maximize their investment in technology. Partnering with local businesses and educational institutions, Microsoft facilitates numerous training and certification programs that enhance the local workforce's proficiency with advanced technologies. This strategic focus is particularly relevant as Saudi Arabian businesses continue to grow and compete on a global scale. By integrating Microsoft Dynamics solutions, companies in the region are not only optimizing their operational efficiency but also gaining the agility needed to adapt to the fast-paced changes of the digital economy.

In collaboration with the AFL-CIO, Microsoft is also working to ensure that the workforce is well prepared to meet the challenges and opportunities presented by the integration of AI technologies in the workplace. Through educational initiatives and direct engagement with workers, Microsoft is setting a precedent for how technology can be used to enhance work rather than replace it, ensuring that AI serves as a tool for development and empowerment. By focusing on the specific needs of the Saudi Arabian market, Microsoft Dynamics solutions are helping pave the way for a more prosperous and technologically adept future, reinforcing Microsoft's role in the region's economic transformation.

In February 2023, Microsoft unveiled an updated version of Bing, now known as Copilot in Bing. This enhanced AI-powered search tool aids users by summarizing search results and offering interactive chat capabilities, including the creation of poems, jokes, and stories. Unique to this platform is the ability to generate images using Bing Image Creator.
This service incorporates sophisticated technologies such as OpenAI's GPT-4 and DALL-E, models that Microsoft and OpenAI developed into robust tools prior to their widespread release. In November 2023, this service was aptly renamed to better reflect its capabilities.
Microsoft’s Ethical AI Framework
At Microsoft, adherence to ethical AI practices is paramount. Copilot in Bing was developed under stringent guidelines outlined in Microsoft's AI Principles and Responsible AI Standard, with insights from the company's AI ethics board. The continuous evolution of Copilot in Bing is a testament to Microsoft's dedication to enhancing these efforts, with updates and improvements shared regularly.

Technical Foundations and Safety Measures
Definitions:
Copilot Pro and Custom GPTs
Users with Microsoft accounts now have access to Copilot Pro, which improves performance and provides advanced features such as faster AI-driven outputs. In addition, Copilot GPTs allow for customized interactions based on user interests, from travel to cooking, improving the relevance of responses.

Microsoft Dynamics Solutions in Saudi Arabia
In line with providing advanced technology solutions globally, Microsoft also offers Microsoft Dynamics solutions in Saudi Arabia, integrating local business needs with global technology standards. This commitment ensures that businesses in Saudi Arabia have access to cutting-edge tools tailored to regional requirements.

Copilot Integration in Windows
The integration of Copilot in Bing with the Windows operating system enables a seamless user experience, allowing natural language commands to adjust settings and enhance functionality. This integration highlights Microsoft's efforts to blend AI technology with user-centric design.

How does Copilot in Bing work?
Copilot in Bing represents a significant advancement in search technology, blending traditional web search with the capabilities of generative AI. When a user enters a query, whether a simple question or a complex prompt requiring creative content, the system analyzes it using a combination of Microsoft's proprietary technology and OpenAI's models, including GPT-4 for language understanding and processing, and DALL-E for visual content generation.

The process begins with the user's input, which can be text, voice, or even an image. Copilot in Bing then applies a 'metaprompt', a guiding set of instructions that aligns the AI's responses with Microsoft's ethical AI guidelines and the specific needs of the user. The metaprompt also helps the system contextualize the query using recent conversations and other relevant data. At the same time, Copilot in Bing pulls in top search results, which act as 'grounding' for the AI's responses. This grounding ensures that the information provided is not only relevant but also verified against high-quality, credible sources, so that responses are accurate, informative, and contextually appropriate (a minimal sketch of this grounding flow follows below).

The ability to integrate these elements seamlessly is what sets Copilot in Bing apart. It delivers not only traditional search results but also summaries, creative content, and interactive chat responses, making information retrieval more efficient and more engaging. Moreover, the integration of Copilot in Bing with Microsoft's broader ecosystem, including Windows and Microsoft Dynamics solutions, makes it useful for both general users and professional environments. For instance, businesses in Saudi Arabia using Microsoft Dynamics solutions can leverage Copilot in Bing for enhanced data retrieval and decision making, blending localized insights with global technology. In summary, Copilot in Bing combines generative AI with robust search technology to deliver a versatile and user-friendly search experience, and its continued development under responsible AI practices is intended to keep it a reliable and innovative tool in the evolving digital landscape.
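To make the grounding flow described above more concrete, here is a minimal, hedged sketch of a retrieval-grounded chat pipeline of that general shape: a user query is combined with a metaprompt (system instructions) and top search results before being sent to a language model. This is not Microsoft's actual Copilot implementation; search_web, generate, and every other name here are hypothetical stand-ins for the real components.

```python
# Minimal sketch of a retrieval-grounded chat flow, as described above.
# Hypothetical stand-ins only; this is not the Copilot in Bing pipeline.
from typing import List

METAPROMPT = (
    "You are a helpful assistant. Answer using only the numbered search "
    "results provided, cite them as [n], and refuse unsafe requests."
)

def search_web(query: str, top_k: int = 3) -> List[str]:
    """Placeholder for a web search call returning result snippets."""
    return [f"(snippet {i + 1} for '{query}')" for i in range(top_k)]

def build_prompt(user_query: str, snippets: List[str]) -> str:
    """Combine system instructions, grounding snippets, and the user query."""
    grounding = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return f"{METAPROMPT}\n\nSearch results:\n{grounding}\n\nUser: {user_query}"

def generate(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return f"(model answer grounded in a prompt of {len(prompt)} characters)"

if __name__ == "__main__":
    query = "How do tides work?"
    answer = generate(build_prompt(query, search_web(query)))
    print(answer)
```

The point of the sketch is the ordering: retrieval happens before generation, so the model's answer can be constrained to, and cited against, the retrieved material rather than produced from the model alone.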
The digital age has been a revolution for creative expression. Millions of people use powerful generative AI tools to turn ideas into reality. However, as Microsoft and other technology companies introduce these tools, safeguarding them from misuse becomes a pressing concern.
History teaches us that innovation can have a dark side. Tools become weapons, and this pattern is repeating with AI. Malicious actors exploit these new tools to create deepfakes (AI-generated video, audio, and images). This trend threatens elections, fosters financial fraud, enables harassment, and fuels cyberbullying. Immediate action is required. Fortunately, we can learn from past experience in cybersecurity, election security, combating online extremism, and child protection. Microsoft champions a comprehensive approach focused on six key areas to safeguard individuals and communities:

Robust Security Architecture: Microsoft prioritizes safety by design. This involves a multi-layered technical approach encompassing AI platforms, models, and applications. Measures include red-team analysis (simulated attacks to expose vulnerabilities), preemptive content blockers, abusive prompt blocking, automated testing, and swift bans for system abusers. Data analysis plays a crucial role in this architecture. Microsoft offers a strong foundation through its Responsible AI and Digital Safety Standards, but continuous innovation is necessary as the technology evolves.

Enduring Media Provenance and Watermarking: This is key to combating deepfakes. At Build 2023, Microsoft unveiled media provenance capabilities that use cryptographic methods to embed metadata within AI-generated content, tracing its source and history (a simplified sketch of the signed-metadata idea appears at the end of this article). Microsoft collaborates with industry leaders on research and development for provenance authentication methods, co-founding Project Origin and the Coalition for Content Provenance and Authenticity (C2PA). Recently, Google and Meta joined C2PA, a step Microsoft applauds. Microsoft is already integrating provenance technology into its image creation tools (Bing's Microsoft Designer and Copilot), and work to expand media provenance across all image creation and manipulation tools is underway. Microsoft also actively explores watermarking and fingerprinting techniques to bolster provenance. Ultimately, users deserve the ability to quickly discern whether an image or video is AI-generated or manipulated.

Guarding Services from Abusive Content and Conduct: While freedom of expression is valued, it should not shield individuals who use AI-generated voices to defraud others, nor should it protect deceptive deepfakes that sway public opinion or empower cyberbullies and distributors of nonconsensual pornography. Microsoft is committed to identifying and removing such deceptive and abusive content from its consumer services, such as LinkedIn, the gaming network, and others.

Industry-Wide Collaboration with Governments and Civil Society: While individual companies bear responsibility for their own products, collaboration fosters a safer digital ecosystem. Microsoft seeks to work closely with others in the tech sector, especially within the generative AI and social media spheres. Proactive efforts with civil society groups and collaboration with governments are also crucial. Drawing on experience combating online extremism (the Christchurch Call), collaborating with law enforcement (the Digital Crimes Unit), and protecting children (the WeProtect Global Alliance), Microsoft is committed to spearheading new initiatives across the tech sector and with other stakeholders.

Modernized Legislation for Public Protection: Addressing these new threats may necessitate new legislation and law enforcement efforts.
Microsoft looks forward to contributing ideas and supporting governments' initiatives to enhance online protection while upholding core values such as free speech and privacy.

Public Awareness and Education: A well-informed public is our strongest defense. Just as we have learned to be skeptical of online information, we need to develop the ability to distinguish legitimate content from AI-generated content. Public education tools and programs, developed in collaboration with civil society and leaders, are essential.

Combating malicious AI-generated content will take continuous effort and collaboration. By prioritizing innovation and collaboration, we can ensure that technology, including Microsoft Dynamics solutions in Saudi Arabia, empowers rather than endangers the public.
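As referenced in the media provenance section above, the sketch below illustrates the general idea behind signed provenance metadata: a content hash and creation details are bound together and signed so that later tampering can be detected. This is a simplified illustration only, not the C2PA specification or Microsoft's implementation; it uses an HMAC with a shared demo key purely for brevity, whereas real provenance systems rely on certificate-based signatures, and all names here are hypothetical.

```python
# Simplified illustration of signed provenance metadata (not the C2PA format).
# An HMAC over the content hash and metadata lets a verifier detect tampering.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key-only"  # real systems use certificate-based signing

def attach_provenance(content: bytes, generator: str) -> dict:
    """Create a provenance record binding metadata to the content's hash."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g. which AI tool produced the content
        "claim": "AI-generated image",
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check the signature and that the record matches this exact content."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and unsigned["content_sha256"] == hashlib.sha256(content).hexdigest())

if __name__ == "__main__":
    image_bytes = b"...rendered pixels..."
    manifest = attach_provenance(image_bytes, generator="hypothetical-image-model")
    print(verify_provenance(image_bytes, manifest))        # True
    print(verify_provenance(b"tampered bytes", manifest))  # False
```

The takeaway is that the provenance record travels with the content, and any edit to either the content or its metadata invalidates the signature, which is the property that lets downstream viewers flag manipulated or mislabeled media.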