
Navigating AI’s Geopolitical Shifts with Jisoo Kim


Artificial intelligence is no longer just a tech trend - it’s shaping the global balance of power. In this episode of The AI Grapple, I sat down with Jisoo Kim, whose background in politics and national security gives her an incredibly sharp lens on the bigger picture behind AI.

Jisoo brings insight not just into how businesses use AI, but how it’s reshaping international diplomacy, national security, and our economy. We opened the conversation by unpacking why AI has become a geopolitical flashpoint. It’s not just about ChatGPT or fancy automation anymore. It’s about who owns the infrastructure, the chips, the data, and how those assets shift power between nations.


The US currently leads thanks to innovation in the private sector and key supply chain control, particularly through Taiwan’s TSMC. But China is moving quickly, investing in AI at scale, stockpiling semiconductors, and positioning itself to bypass future sanctions. For Australia - and every business operating globally - this matters. Choosing which AI tool to use isn’t a neutral act. It’s often a choice between political ecosystems and economic alliances.


The Strategic Role Australia Could Play in the AI Era

Australia may be small compared to the AI powerhouses, but as Jisoo pointed out, we hold a unique and respected position in global politics. We’re a middle power with deep relationships, democratic credibility, and the opportunity to lead on responsible and ethical AI.


That leadership doesn’t come from building the biggest models or controlling the most data. It comes from helping our regional neighbours prepare for automation, from exporting our expertise in education and safety, and from supporting developing nations that could be left behind in the AI shift.


For businesses, this means thinking globally and acting ethically. Whether you’re deploying AI internally or scaling solutions into new markets, aligning with Australia’s values of fairness, privacy, and transparency will matter more than ever.


Why AI Platform Choice Is a Political Decision

Jisoo shared a strong warning about Chinese AI platforms - and how their political alignment impacts what they produce. Tools like DeepSeek and Kimi might look like just another model, but their outputs are filtered to reflect Chinese Communist Party guidelines.


That’s not a bug - it’s by design. In China, tech companies and the government are closely linked. And when you use their tools, your queries, data, and even the model’s output can be shaped by political interests. Jisoo reminded us that this is not the same as Meta collecting ad data or Google tracking browsing behaviour. It’s about control of ideas, information, and narrative.


For Australian companies - particularly those in finance, health, education, or government - this is a critical issue. Even the perception of using politically compromised AI can damage reputation, trigger compliance risks, or undermine internal trust.


Why Waiting for Regulation Is a Risky Strategy

One of the clearest themes in our conversation was the speed gap between legislation and innovation. The EU AI Act was a landmark effort, but even that is already struggling to adapt to newer AI capabilities. Meanwhile, countries like the US and UK are taking a more flexible, risk-based approach - but even that won’t be fast enough.


Jisoo’s advice? Don’t wait for regulation. By the time policies are in place, your competitors will already be well ahead. Every business - regardless of size - should set its own standards, define acceptable use cases, and start educating staff on responsible usage.


The absence of regulation isn’t an excuse to do nothing - it’s a reason to take ownership.


How Businesses Can Take Action on AI Right Now

Jisoo works with a wide range of clients - from non-profits to mining companies - and the one thing they all have in common is fear. Leaders worry they’re behind. Employees worry they’ll be replaced. And many are paralysed by the overwhelming potential of AI.


The solution isn’t complex. Start with three to five clear use cases that add value in your context. Create guardrails around them - policies, workflows, responsible practices - and let teams experiment safely. Most importantly, communicate openly. Make it clear that AI is here to amplify people, not replace them.


Companies that lead with transparency and empower their teams will build a culture that embraces AI - not one that hides from it.


Microsoft Copilot vs Open Models: What Should Businesses Choose?

We talked about the growing popularity of Microsoft Copilot, and why it’s often the go-to for enterprise. The answer is simple: trust. Copilot integrates into the Microsoft stack, has strong data governance, and feels safe - especially for organisations dealing with sensitive information.


But Jisoo also pointed out that Copilot isn’t always the most advanced. Open-source or niche models may offer better performance or customisation. The trade-off is complexity and potential risk.


This isn’t a one-size-fits-all decision. The right platform depends on your business model, your compliance needs, your tech infrastructure, and how far you want to push AI innovation. It’s not about choosing the “best” tool - it’s about choosing the right one for where you are right now.


Why Education Is the Real Unlock for AI Adoption

One of the most refreshing parts of this interview was hearing Jisoo’s passion for AI literacy. She believes, and I agree, that tools don’t unlock value - people do.


Many businesses invest heavily in licences, only to see tools go untouched because no one knows how to use them - or worse, they’re afraid to try. People worry that using AI will expose them, make them redundant, or show they don’t have the answers.


Jisoo has seen real transformation when companies flip that narrative. Run a lunch-and-learn. Encourage staff to use ChatGPT at home for small tasks. Let people explore, fail, and learn in a safe environment. When teams feel empowered and trusted, adoption skyrockets. Culture changes. And AI moves from being a threat to becoming an everyday part of how work gets done.


What Australia Could Do to Shape a Better AI Future

To close the conversation, we talked about Australia’s global responsibility. Jisoo is passionate about ensuring AI development doesn’t just benefit the “top end of town.” She wants to see Australia take the lead in the region - supporting other countries through education, partnerships, and responsible deployment.


Call centres, manufacturing, and knowledge jobs are all vulnerable to automation. If we don’t support those communities with upskilling and opportunity, we risk creating deeper inequality across the Indo-Pacific.


But we can do better. Australia is well-positioned to lead not just with technology, but with humanity. And businesses that step into that space - whether through social impact, international training, or open collaboration - will be part of something much bigger than digital transformation.


Jisoo Kim brings a rare blend of geopolitical insight and real-world business strategy. Her experience in government, defence, and now AI consulting makes her voice essential in a time when artificial intelligence is shaping everything from workplace productivity to global security.


If you’re building an AI strategy, managing risk, or simply trying to understand how this technology fits into our future - this conversation will give you the clarity and context you need.


Don’t forget to subscribe to The AI Grapple on your favourite podcast platform for more deep, meaningful conversations about the human side of artificial intelligence.

Connect with Jisoo Kim: