Establishing healthy AI-human partnerships

Posted on June 13, 2024

Insights from the Australian Institute of Training and Development

Artificial intelligence is rapidly evolving and changing the way that many organisations operate. The pace of change is only accelerating and it’s often creating more questions for people than answers.

An important goal for Cliftons is to help our clients stay ahead of industry trends so we can work in partnership to deliver remarkable events. We know AI is going to play an important role in the future – even if we don’t know exactly what that looks like right now. Recently, we hosted an event in Melbourne (with dates to come in other cities) with Neil Coulson and the Australian Institute of Training and Development to explore how organisations can establish healthy AI-human partnerships.


The Australian Institute of Training and Development (AITD) is the peak not-for-profit organisation for learning and development (L&D) professionals, committed to advancing excellence in the field. Among the many ways it supports the L&D community is providing a platform for its members to speak and write on topics of strategic interest. One member of the AITD network is Melbourne-based Neil Coulson, a data literacy and technical L&D professional at an Australian-based multinational organisation, with over a decade of experience helping both large-scale industry leaders and community-based not-for-profits embrace new technologies and realise their potential.


Neil’s presentation covered the limitations and challenges of AI, as well as how we can take a human-centred and ethically conscious approach to using this emerging technology. Attendees in Melbourne appreciated the insights, with the presentation generating many conversations post-session.


Here’s just some of what was covered.

Where has AI come from?

While it feels like AI has exploded out of nowhere in the past 12 months, it has actually been a gradual evolution from theoretical inception to today’s practical applications. The concept of AI has been around for 60+ years and has been through several hype cycles and periods of disillusionment, in part because the technical limitations of the time couldn’t live up to the predictions.


This means claims like ‘general AI is 5–10 years away’ have been made since the 1960s. Today’s emerging generative AI technologies – like ChatGPT – have opened up a world of possibilities, with capabilities to learn, perceive and understand like a human being. The next expected phase is faster data processing and analysis, with enhanced decision-making capabilities.

But what does this mean for humans?

While AI is exciting, Neil emphasised that it also has a lot of limitations, with human intuition and creativity remaining irreplaceable. It’s important for organisations to balance AI’s potential with reality, and to stay aware of its limitations.


For example, AI is only as good as the data it is trained with. As the saying goes, put garbage in and you’ll get garbage out. An estimated $100m has been put towards training GPT-4, drawing on more than a million hours of YouTube video transcriptions and hundreds of gigabytes of datasets from books, websites and more – yet high-quality training data is expected to run out by 2026. This matters because, as GenAI models start consuming their own outputs and the quality of data decreases, feedback loops will reinforce existing biases and incorrect facts. It can also mean that words and language seldom used by AI models fall further out of the vocabulary, with some words and concepts at risk of becoming endangered or disappearing from use altogether.


AI also doesn’t truly ‘understand’ – it can’t deal with intuition and nuance. Neil provided numerous examples of AI hallucinating and lying, reinforcing the importance of human involvement in checking the validity of AI-generated work.


For example, try asking Copilot or ChatGPT for a random number between 1 and 100, and you’re more likely to get 42 than most other numbers. Neil also gave an example of how you can ‘correct’ an AI’s answer with something incorrect and it will agree with you (1 + 0.9 apparently can equal 1.08 if you ask ChatGPT). These are just some of the obvious ‘hallucinations’ that can occur, with more and more instances of these ChatGPT, Google and Copilot AI fails being shared online.
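For those who want to test the ‘random number’ bias for themselves, here is a minimal sketch (an illustrative example of ours, not something from Neil’s presentation) that repeatedly asks an OpenAI chat model for a number between 1 and 100 and tallies the answers. The model name, prompt wording and sample size are assumptions made purely for demonstration, and it assumes the openai Python package is installed with an API key configured.

```python
# Illustrative sketch only: empirically checking whether an LLM's "random"
# numbers cluster around certain values (such as 42), as described above.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable;
# the model name, prompt and sample size are placeholders for demonstration.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

counts = Counter()
for _ in range(50):  # small sample size, purely for demonstration
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{
            "role": "user",
            "content": "Pick a random number between 1 and 100. Reply with the number only.",
        }],
        temperature=1.0,
    )
    answer = response.choices[0].message.content.strip()
    if answer.isdigit():
        counts[int(answer)] += 1

# If the model were truly random, no single number should dominate;
# in practice a handful of "favourite" numbers often appear far more frequently.
print(counts.most_common(5))
```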

What about the ethics of AI?

It’s not just the correctness of AI that needs to be kept in mind. Neil ran through a number of ethical considerations around the use of AI that organisations need to be mindful of.


Firstly, there are the obvious concerns of bias, transparency and consent. What data has AI been trained on, and does it reinforce certain biases? Do people know how their data is being used to train AI? Are people consenting to this? These issues are still being addressed, and unfortunately many of the companies producing AI models are passing that responsibility on to their users. The manner in which the models have been trained has come under increased scrutiny, as models use any data they can find, including artwork, posts, published videos and text.


Secondly, the environmental impacts need to be considered. Training a large model can take around 50 gigawatt-hours of electricity, and AI demands significant energy for creation, training, deployment and ongoing use. Studies have shown that producing a single high-quality image in a GenAI tool uses as much power as fully charging a mobile phone. Neil highlighted Kate Crawford’s book Atlas of AI as a great resource for those who would like more information on the lifecycle of AI and its impacts.


There are also numerous concerns around fear and trust of AI that will need to be explored over time.

What does AI mean for jobs?

Right now, there’s a lot of hype – as well as uncertainty – about what AI means for real-world applications in industry, and the impact on both job displacement and new opportunities.


Neil emphasised that the real potential comes from using AI to augment human capabilities, using it to enhance productivity and decision-making. This includes giving AI a brief description, rationale and intended impact to keep results concise yet concrete. The best results come from an AI-in-the-loop model (rather than human-in-the-loop), where AI augments humans, but humans remain at the centre of decision-making.


This can be accomplished by categorising work into "Just Me" tasks, "Just AI" tasks, and tasks that would benefit from both. That could mean having GenAI assist with brainstorming ideas, provide critical review of work you’ve created, or pose as the intended audience ("review this document as if you are a CTO and ask me 10 questions about it"). It also means thinking of AI as more than GenAI. AI has hundreds of applications, from computer vision and sentiment analysis to natural language processing and automation. Some organisations (particularly heavy data users such as those in finance, healthcare and cybersecurity) are already benefiting from these tools. GenAI has some great uses, but it is not all that AI is capable of.

So how can we help our teams prepare for AI?

As a learning and development professional, Neil believes there’s a key opportunity to prepare team members to work alongside AI, with a focus on adaptability and lifelong learning. There’s a range of skills, both technical and non-technical, that will continue to be important as organisations progress with their adoption of AI. Organisations need to make learning a strategic priority, encouraging growth mindsets and experiential learning that integrates with business goals.


It’s also important to remember two key points. Firstly, AI is a mirror – it will reflect back the data and processes it is given, so for many companies the first step will be making sure their data is up to the task. Secondly, successful AI is not a technology-first solution but a people-first one.

Cliftons x AITD Event in Melbourne

Keen to learn more? Register for updates on future events

After such positive feedback from attendees at our Melbourne event, we’re co-hosting events with AITD in other cities over the coming months. To be on the list for future sessions, register your interest here.
