Andrea Korney, Vice President of Sustainability and Special Advisor for J.S. Held’s ESG & EHS Digital Solutions group, has been at the forefront of sustainability for over 25 years. Her extensive career in the energy, metals, and extraction industries has uniquely positioned her to navigate the evolving landscape of environmental responsibility and technology. In a recent interview, Korney shared her perspective on the intersection of artificial intelligence (AI) and sustainability, a topic of increasing relevance as businesses integrate AI technologies while being held to higher sustainability standards.
Korney’s path to becoming a leader in sustainability is deeply rooted in her experience across industries, particularly oil and gas, power generation, and heavy metals. “My career began in energy at a time when sustainability wasn’t part of the core conversation,” she recalls. Korney learned to navigate complex regulatory environments while advocating for more responsible business practices. “Working in oil and gas, you couldn’t avoid the environmental side,” she says, noting that from early on she became deeply involved in ensuring that her projects complied with emerging emissions and environmental standards. Providing leadership, strategic planning, and business development support to corporations, she has built strong alliances with trade associations, diversity organizations, tribes, and unions. Her contributions to supplier diversity and Native American engagement in energy earned recognition from Bloomberg’s Sustainability Index in 2017. Korney is an active panelist and diversity advocate who has contributed to many articles and authored sustainability whitepapers. Her commitment to equitable economic development in North America is evidenced by her service on the advisory committee of the Supplier Diversity Advisory Council, as an energy advisor to the Latino Coalition, and as a board member of the Canadian Aboriginal and Minority Supplier Council (CAMSC).
With a background in environmental science, Korney’s expertise covers technical areas like emissions reduction, water treatment, and hazardous materials management. Her career took her across the globe, from pipeline projects in the Middle East to power generation initiatives in North America. This vast experience equipped her with a comprehensive understanding of environmental regulations and the challenges companies face in balancing profitability with sustainability.
Her experience in regulated industries also gave her a deep respect for compliance. “The oil and gas industry is highly regulated, and that helped me develop a strong foundation in navigating environmental standards,” she explains. She managed large-scale projects that required detailed emissions monitoring, water treatment, and environmental impact assessments, giving her firsthand experience with sustainability challenges in complex systems. That experience is key to her current work advising businesses on how to embed sustainability into their operations.
Korney begins by emphasizing the energy demands of AI, noting how critical it is for developers and businesses to understand the environmental impact of their AI tools. “There’s such a huge draw on energy resources,” she says, explaining that energy consumption is at the heart of sustainability challenges in the AI sector. “How that energy is used in any business is a major focus in sustainability.”
In her view, pressure is building on companies not only from regulators but also from clients who expect AI solutions that align with environmental goals. Energy usage, whether in data centers or in training machine learning models, is scrutinized across industries. “The emissions and energy usage vary depending on where in the world you are,” Korney explains, underscoring the complexity of addressing these issues on a global scale. “We’ve done actual analysis on AI carbon footprints,” she reveals, indicating that companies in the tech sector are already grappling with the environmental impact of their AI models. This work highlights the growing demand for tools that measure the sustainability of AI applications.
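To see why location matters so much, consider how an AI workload's operational footprint is commonly estimated: energy consumed, scaled by data-center overhead, multiplied by the local grid's carbon intensity. The sketch below is a simplified illustration of that arithmetic, not J.S. Held's actual methodology; all figures are assumed for the example.

```python
# Illustrative sketch of an operational CO2e estimate for an AI workload.
# All numbers below are assumptions for demonstration, not measured values.

def carbon_footprint_kg(power_kw: float, hours: float, pue: float,
                        grid_intensity_kg_per_kwh: float) -> float:
    """Estimate CO2e in kg: IT energy * data-center overhead (PUE) * grid carbon intensity."""
    energy_kwh = power_kw * hours * pue
    return energy_kwh * grid_intensity_kg_per_kwh

# The same one-month job on a hypothetical 300 kW cluster, on two different grids:
job = dict(power_kw=300, hours=720, pue=1.2)
low_carbon = carbon_footprint_kg(**job, grid_intensity_kg_per_kwh=0.05)  # hydro-heavy grid
high_carbon = carbon_footprint_kg(**job, grid_intensity_kg_per_kwh=0.7)  # coal-heavy grid
print(f"Low-carbon grid:  {low_carbon:,.0f} kg CO2e")
print(f"High-carbon grid: {high_carbon:,.0f} kg CO2e")
```

Identical compute, an order-of-magnitude difference in emissions — which is exactly the point Korney makes about geography.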
Korney’s background in highly regulated industries like oil, gas, and power generation gives her a deep understanding of the regulatory frameworks companies must navigate. “Working in oil and gas, you couldn’t avoid the environmental side,” she says, recounting her experience with pipeline projects that required extensive documentation to meet emissions targets.
For AI, these regulatory concerns are emerging. But Korney sees a parallel with other industries. “AI companies are going to face similar pressures,” she predicts. “Right now, AI tools are being adopted because they’re cost-effective, but sustainability will soon be a bigger factor.”
When discussing how companies can better track and manage their sustainability efforts, Korney points to the growing role of technology. At J.S. Held, she and her team use advanced tools to gather data across environmental sectors and the social side of ESG (Environmental, Social, and Governance). “A lot of these tools are starting to implement AI in different capacities,” she says, pointing out that technology can help companies streamline their sustainability reporting and compliance.
The reporting process itself varies depending on the industry and region. “In Europe, regulations like CSRD have real teeth,” Korney says. In contrast, U.S. regulations can be more fluid, often influenced by politics. This variance means that companies must tailor their approaches based on local regulations, which often conflict. “You have to really understand the regulatory requirements in the region and industry you’re operating in,” she advises.
Beyond energy usage, Korney highlights a second, less obvious impact of AI: its social consequences. “There’s a concern about the jobs that AI will replace,” she says, noting that certain demographics may be disproportionately affected. As AI continues to automate tasks across industries, companies must consider the social implications of reducing the need for human workers in certain roles.
For Korney, addressing these concerns requires a balanced approach, integrating AI into business operations in a way that supports both sustainability goals and social responsibility. “Companies are adopting AI because it’s cheaper, but they also need to be aware of the broader impacts,” she believes.
When asked about actionable steps businesses can take to minimize their environmental footprint, Korney is clear that green energy is key. “All types of green energy are better than fossil fuels,” she says. Implementing solutions like microgrids—localized energy grids that can operate independently of the traditional grid—can significantly reduce a company’s energy consumption. Solar generation, too, offers opportunities for companies to generate their own energy, reducing their reliance on conventional power sources.
In addition to energy solutions, Korney emphasizes the importance of rethinking infrastructure. “Decreasing your footprint through innovative building technologies is another way to mitigate your impact,” she advises. This can include using renewable energy sources like solar and wind, as well as incorporating energy-efficient designs and systems in facilities.
Another area Korney touches on is the growing market for carbon offsets—credits that companies purchase to compensate for their carbon emissions. While these offsets can be part of a broader sustainability strategy, Korney cautions against relying too heavily on them without fully understanding their limitations. “There are scams out there around carbon offsets,” she warns. Voluntary markets, where many of these credits are sold, pose higher risks because the projects are not always verified or validated.
“The regulated market is more expensive but offers more security,” she adds. Companies need to assess their risk tolerance when purchasing offsets and consider the types of projects they are supporting. “There’s concern around reforestation and deforestation credits, for example,” Korney explains. “If the trees are removed before the expected carbon sequestration period, the project may not deliver the intended impact.”
Despite the challenges, Korney is optimistic about the future of AI and sustainability. She believes that companies that proactively manage their carbon footprint will gain a competitive edge in the marketplace. “If a company can say that their AI services have a lower carbon footprint than their competitors, that adds tremendous value,” she says.
For businesses, this means understanding their sustainability impact from the ground up. “At J.S. Held, we help companies write sustainability strategies that align with their growth plans,” Korney explains. The key is integrating sustainability into every aspect of the business, from product development to customer engagement.
In today’s market, companies face pressure from three primary sources: consumers, investors, and regulators. According to Korney, all three play important roles, though their influence varies by industry. “Not everyone is at the mercy of investors,” she points out. Some companies, particularly those operating business-to-business, may be more concerned with regulatory compliance than consumer demand.
However, as consumers become more educated and more vocal about sustainability, their influence is growing. “Consumers are much more knowledgeable now than they were even a few years ago,” Korney says. This shift means that companies must be prepared to meet higher expectations around sustainability if they want to maintain their market position.
Looking ahead, Korney remains bullish on the long-term impact of sustainability initiatives, even as regulations evolve. "The U.S. government is changing, and there may be some shifts," she says, but she is confident that global trends will continue to drive progress. “Consumers are more educated, and regulations in places like California will continue to push for better behavior.”
For companies navigating the AI space, the message is clear: sustainability is no longer optional. As Korney puts it, “Balancing sustainability with commercial strategy is critical.” Companies that embrace this balance will not only meet regulatory requirements but also gain the trust of consumers and investors alike.
In a rapidly changing world, where AI is transforming industries and environmental concerns are top of mind, the intersection of technology and sustainability is becoming a defining issue for businesses. With experts like Andrea Korney leading the way, there is hope that AI can be a force for both innovation and environmental responsibility.
Andrea will be a featured speaker and panelist on Arc AI’s upcoming webinar “AI & Sustainability”.
You can follow Andrea at: www.linkedin.com/in/andrea-korney-00063614
OKLAHOMA CITY – Nov 19, 2024
ARC today announced KeyGuard HE, a secure, enterprise-controlled AI solution that lets companies reap the benefits of AI without the risk of their intellectual property being used for training and inference by Large Language Models (LLMs) in public cloud AI services such as ChatGPT, Gemini, and Anthropic's Claude.
With experts warning of the perils of sharing confidential or even personal information with AI chatbots, security-conscious enterprises are banning employees from using public AI tools for fear of losing the intellectual property they have created, to say nothing of the resources invested in it.
AI expert Mike Wooldridge, a professor of AI at Oxford University, told The Guardian last year, “you should assume that anything you type into ChatGPT is just going to be fed directly into future versions of ChatGPT.” And that can be everything from computer code to trade secrets.
“With KeyGuard HE, corporate users can experience the power of AI with confidence that they hold complete control over the keys to security, and that no untrusted vendor, or anyone else, can ever access or view that data without permission,” said TJ Dunham, Founder & CEO of ARC. “The model can’t leak your data, as it is encrypted, and customers can revoke key access at any time. KeyGuard HE empowers users to manage their private keys with full transparency and trust by using blockchain-based smart contracts to provide verifiable proof-of-actions.”
ARC is a deep tech company developing the next generation of efficient AI and secure Web3 products. ARC was built on the belief that AI should work in the service of humanity and the environment it needs to exist, and should be simple and transparent enough to be accessible to as many people as possible. Founded in 2023 and headquartered in Oklahoma City, ARC has significant operations in Wilmington, Del., and Zurich, Switzerland.
Mustafa Suleyman, the CEO of Microsoft AI, is no stranger to the complex landscape of artificial intelligence. As a prominent figure in the field, he has observed the rapid development of AI and remains both hopeful and cautious. In a recent interview with Steven Bartlett on The Diary Of A CEO, Suleyman shared many of the thoughtful insights also found in his book “The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma”.
Speaking on the future of AI, Suleyman shares bold predictions on how AI could reshape society while also sounding alarms about the dangers of unchecked advancement. He believes that AI, if mismanaged, could lead to significant risks, from power consolidation among a few actors to a race for dominance that ignores the broader implications of safety and ethics. Suleyman’s solutions advocate for a comprehensive, cooperative approach, blending technological optimism with stringent regulatory foresight.
Predictions: The Dawn of a New Scientific Era
Suleyman envisions an era of unprecedented scientific and technological growth driven by artificial intelligence. He refers to AI as an “inflection point,” emphasizing that its capabilities will soon bring humanity to the brink of a monumental transformation. In the coming 15 to 20 years, Suleyman foresees that the power of AI to produce knowledge and scientific breakthroughs will drive a paradigm shift across industries, reshaping fields from healthcare to energy.
“We’re moving toward a world where AI can create knowledge at a marginal cost,” Suleyman says, underscoring the economic and social impact that this development could have on a global scale. According to him, the revolutionary aspect of AI lies in its potential to democratize knowledge, making intelligence and data-driven solutions accessible to “hundreds of millions, if not billions, of people.” As this accessibility increases, Suleyman predicts, societies will become “smarter, more productive, and more creative,” fueling what he describes as a true renaissance of innovation.
In this future, Suleyman envisions AI assisting with complex scientific discoveries that might have otherwise taken decades to achieve. For instance, he highlights that AI could speed up the development of drugs and vaccines, making healthcare more accessible and affordable worldwide. Beyond healthcare, he imagines a world where AI assists in reducing the high costs associated with energy production and food supply chains. “AI has the power to solve some of the biggest challenges we face today, from energy costs to sustainable food production,” he asserts. This optimistic view places AI at the heart of global problem-solving, a force that could potentially mitigate critical resource constraints and improve quality of life for millions.
Risks: Proliferation, Race Conditions, and the Misuse of Power
While Suleyman is enthusiastic about AI’s potential, he acknowledges the accompanying risks, which he describes as both immediate and far-reaching. His concerns primarily revolve around the accessibility of powerful AI tools and the potential for their misuse by malicious actors or unregulated entities. Suleyman cautions against a world where AI tools, once they reach maturity, could fall into the wrong hands. “We’re talking about technologies that can be weaponized quickly and deployed with massive impact,” he warns, emphasizing the importance of limiting access to prevent catastrophic misuse.
One of Suleyman’s significant concerns is what he calls the “race condition.” He argues that as nations and corporations realize the vast economic and strategic advantages AI offers, they may accelerate their development programs to stay ahead of competitors. This race for dominance, he suggests, mirrors the Cold War nuclear arms race, where safety often took a backseat to competitive gain. “The problem with a race condition is that it becomes self-perpetuating,” he explains. Once the competitive mindset takes hold, it becomes difficult, if not impossible, to apply the brakes. Nations and corporations may feel compelled to push forward, fearing that any hesitation could result in losing their competitive edge.
Moreover, Suleyman is concerned about how AI could consolidate power among a few key players. As the technology matures, there is a risk that control over powerful AI models will reside with a handful of corporations or nation-states. This concentration of power could result in a digital divide, where access to AI’s benefits is unevenly distributed, and those without access are left behind. Suleyman points to the potential for AI to be used not only as a tool for innovation but as a means of control, surveillance, and even repression. “If we don’t carefully consider who controls these technologies, we risk creating a world where a few actors dictate the future for all,” he warns.
Potential Scenarios of AI Misuse
Suleyman’s fears are not unfounded, given recent developments in autonomous weapon systems and AI-driven cyber-attacks. He points to scenarios where AI could enable the development of autonomous drones capable of identifying and targeting individuals without human oversight. Such capabilities, he argues, would lower the threshold for warfare, allowing conflicts to escalate quickly and with minimal accountability. “The problem with AI-driven weapons is that they reduce the cost and complexity of launching attacks, making conflict more accessible to anyone with the right tools,” Suleyman explains. The prospect of rogue states or non-state actors acquiring these tools only amplifies his concerns.
Another potential misuse of AI involves cyber warfare. Suleyman highlights that as AI-driven systems become more sophisticated, so do cyber threats. Hackers could potentially deploy AI to exploit vulnerabilities in critical infrastructure, from energy grids to financial systems, creating a digital battlefield that is increasingly difficult to defend. “AI has the potential to turn cyber warfare into something far more dangerous, where attacks can be orchestrated at a scale and speed that no human can match,” he says, advocating for a global framework to mitigate these risks.
Solutions: The Precautionary Principle and Global Cooperation
Suleyman believes that the solution to these challenges lies in adopting a precautionary approach. He advocates for slowing down AI development in certain areas until robust safety protocols and containment measures can be established. This precautionary principle, he argues, may seem counterintuitive in a world where innovation is often seen as inherently positive. However, Suleyman stresses that this approach is necessary to prevent technology from outpacing society’s ability to control it. “For the first time in history, we need to prioritize containment over innovation,” he asserts, suggesting that humanity’s survival could depend on it.
One of Suleyman’s proposals is to increase taxation on AI companies to fund societal adjustments and safety research. He argues that as AI automates jobs, there will be an urgent need for retraining programs to help workers transition to new roles. These funds could also support research into the ethical and social implications of AI, ensuring that as the technology advances, society is prepared to manage its impact. Suleyman acknowledges the potential downside—that companies might relocate to tax-favorable regions—but he believes that with proper global coordination, this risk can be mitigated. “It’s about creating a fair system that encourages responsibility over short-term profit,” he explains.
Suleyman is a strong advocate for international cooperation, especially regarding AI containment and regulation. He calls for a unified global approach to managing AI, much like the international agreements that govern nuclear technology. By establishing a set of global standards, Suleyman believes that the risks of proliferation and misuse can be minimized. “AI is a technology that transcends borders. We can’t manage it through isolated policies,” he says, underscoring the importance of a collaborative, cross-border framework that aligns the interests of multiple stakeholders.
The Role of AI Companies in Self-Regulation
In addition to international regulations, Suleyman believes that AI companies themselves have a responsibility to act ethically. He emphasizes the need for companies to build ethical frameworks within their own operations, creating internal policies that prioritize safety and transparency. Suleyman suggests that companies should implement internal review boards or ethics committees to oversee AI projects, ensuring that their potential impact is thoroughly assessed before they are deployed. “Companies need to take a proactive approach. We can’t rely solely on governments to regulate this,” he says, acknowledging that corporate self-regulation is a critical component of the broader containment strategy.
Suleyman also advocates for transparency in AI development. While he understands the competitive nature of the tech industry, he argues that certain aspects of AI research should be shared openly, particularly when it comes to safety protocols and best practices. By creating a culture of transparency, he believes that companies can foster trust among the public and reduce the likelihood of misuse. “Transparency is key. It’s the only way to ensure that AI development is held accountable,” he says, noting that companies must strike a balance between proprietary innovation and public responsibility.
Education and Public Awareness: Preparing Society for an AI-Driven Future
Suleyman is adamant that preparing society for AI’s future role requires more than just regulatory and corporate oversight—it demands public education. He argues that as AI becomes an integral part of society, people need to be informed about its capabilities, risks, and ethical considerations. Suleyman calls for educational reforms that integrate AI and digital literacy into the curriculum, enabling future generations to navigate an AI-driven world effectively. “We need to prepare people for what’s coming. This isn’t just about technology; it’s about societal transformation,” he explains.
Furthermore, Suleyman believes that fostering a culture of AI literacy will help to democratize the technology, reducing the digital divide between those who understand AI and those who don’t. He envisions a world where individuals are empowered to make informed decisions about how AI impacts their lives and work, rather than passively accepting the technology’s influence. “It’s essential that everyone—not just the tech community—understands what AI can and cannot do,” he says, advocating for broader public engagement on these issues.
A Balanced Approach to AI Development
Suleyman’s insights into the future of AI highlight the delicate balance between innovation and caution. On one hand, he is optimistic about AI’s potential to address some of humanity’s most pressing challenges, from healthcare to sustainability. On the other, he is acutely aware of the dangers that come with such powerful technology. Suleyman’s vision is one of responsible AI development, where the benefits are maximized, and the risks are carefully managed through cooperation, regulation, and public education.
As he continues to lead Microsoft AI, Suleyman remains a pivotal voice in the conversation around AI’s future. His advocacy for a precautionary approach and global cooperation serves as a reminder that while AI holds immense promise, it also comes with profound responsibilities. For Suleyman, the ultimate goal is clear: to create a world where AI not only serves humanity but does so in a way that is safe, ethical, and sustainable.
Listen to the full interview with Mustafa Suleyman on YouTube.