About (part 2)
This section unashamedly has a TL;DR badge at its front door, so if you don’t want a slightly deeper dive into AI and learning technologies then it probably isn’t for you.
Like the internet and the World Wide Web before it, Artificial Intelligence (AI) has erupted from deep in the bowels of academic computer science to spray its, at times, still-molten lava and dust over the wider environment and its peoples. As with a volcanic eruption, it is likely to: disrupt and destroy; create new landscapes or habitats; fertilise the land; and potentially act as a catalyst for economic growth and development.
Commercial and political opportunities are perceived, fuelling waves of what we hope does not turn out to be Panglossian optimism among Silicon Valley ‘Tech Bros’, product marketeers, and national politicians. For example, the UK’s politicians see a major role for AI solutions in improving the efficiency and productivity of that country’s National Health Service. Likewise, education is viewed as a potential major beneficiary of AI teaching and learning solutions which can assess and adapt to students’ current levels of knowledge and understanding.
It is this optimistic belief in the transformative power of technology in education which provides the thematic thrust of Learning Machine. Dada de Dada has engaged with this transformative belief since the earliest days of microcomputers, and so opinions were formed in a crucible pre-dating the public internet, the World Wide Web, Microsoft Windows, the Apple Macintosh, the iPhone, WiFi, Bluetooth, YouTube, Facebook, TikTok et al.
Each new technological development excited those involved with teaching because it made what had previously been difficult easier, or because it offered the chance to reach out beyond the confines of a fixed time or place for learning. For example, prior to the widespread adoption of graphical user interface (GUI) operating systems, e.g. Microsoft Windows or Apple’s MacOS, copying a single document from a remote location to a local computer (even one linked to a relatively well endowed academic network) once involved typing a list of esoteric commands to be sent to the distant machine holding the desired document. The requester could then go home for the night and, if they were lucky, the desired document was waiting for them the next morning, or else a terse message stating why it had not been transferred. Nevertheless, we were invariably impressed with this piece of ‘magic’, which is now a simple copy-and-paste task taking seconds to transfer a document from a remote location in ‘the cloud’.
Another piece of ‘magic’ arrived when Apple first introduced an audio chip to its desktop range. The conference audience to whom Dada de Dada demonstrated this stunning ability were much impressed by a machine actually ‘speaking’.
Now we have mobile phones with facilities and functions many times more powerful than that relatively large desktop computer. We stream audio and movies without thought and the internet is omnipresent. Each one of these developments held out the promise of increasingly rich learning and teaching environments and they certainly contributed. Yet none were the panacea the early adopters and enthusiasts in the educational, political, technical and commercial worlds projected them to be.
AI is now the new kid on this particular block, and so the enthusiasm cycle begins again. But is this time different?
Many of the hardware and software infrastructure issues and limitations that bedevilled the past now have solutions, although not everyone in a country has easy access to them. We have online audio, video, and document standards which enable us to transfer or stream multimedia information to devices with ease. We have a multiplicity of online communication channels. All of these are increasingly enabled by high-bandwidth internet access at home, in the office and, sometimes, on the move.
The multiplicity of options for information gathering and flow means that educational institutions now usually opt for some form of standard platform in order to organise and manage this complexity. The education world has been using what are called Learning Management Systems (LMS) for many years, but basically these are front-end interfaces feeding data to, and acquiring data from, backend content management systems. An LMS can be commercial, e.g. Canvas, or open-source, e.g. Moodle. The latter is used by the UK Open University and a number of other UK universities, e.g. the University of Glasgow and the University of Bath; UK Civil Service online courses also use Moodle as their LMS. An LMS can utilise modules of digital content and interactive activity (which may be static, e.g. quizzes, or dynamic, e.g. online discussions) which are sometimes called Learning Objects (LO). These LO can follow various recognised international standards, e.g. SCORM or xAPI, to facilitate reusability, or sharing between different platforms. An LO can function standalone, e.g. as a tutorial, or be aggregated with other LOs as part of a course of study. Dynamic interactions included in an LO are intended to facilitate the social aspects of learning, and so student participation and contributions may be assessed by the teacher.
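To make the xAPI standard mentioned above a little more concrete, here is a minimal sketch of the kind of statement a Learning Object might emit when a student completes a quiz. The actor-verb-object-result structure follows the xAPI specification; the names, identifiers and score are hypothetical, and in practice the statement would be sent to a Learning Record Store rather than printed.

```python
import json

# A single xAPI statement: who (actor) did what (verb) to which activity
# (object), with an optional result. All identifiers below are hypothetical.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "A. Learner",
        "mbox": "mailto:a.learner@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.org/lo/intro-quiz",  # hypothetical LO identifier
        "definition": {"name": {"en-US": "Introductory quiz"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True, "completion": True},
}

print(json.dumps(statement, indent=2))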
Adding AI at different levels of an LMS platform could lead to some interesting results, particularly at the Learning Object level, as the sketch below suggests.
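For instance, a quiz LO could ask a language model for a tailored hint when a student answers incorrectly, instead of displaying a canned message. This is speculative: the sketch assumes OpenAI’s v1 Python client, an API key in the environment, and an illustrative model name, and wiring it into a real LMS is left unexplored.

```python
# Hypothetical hook for a quiz Learning Object: generate a tailored hint
# when a student answers incorrectly. Assumes OpenAI's v1 Python client
# and an OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def hint_for_wrong_answer(question: str, wrong_answer: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": "You are a patient tutor. Give one short hint that "
                           "nudges the student without revealing the answer.",
            },
            {
                "role": "user",
                "content": f"Question: {question}\nStudent answered: {wrong_answer}",
            },
        ],
    )
    return response.choices[0].message.content

print(hint_for_wrong_answer("What gas do plants absorb for photosynthesis?", "oxygen"))
```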
But …
The world of AI at the moment is based on the so-called narrow AI model, i.e. limited to specialised, task- and context-specific knowledge that requires extensive and currently very expensive ‘pre-training’, e.g. image recognition, language translation, chess playing, and spam filtering. So, as the toy example below illustrates, even the AI headline acts, e.g. IBM’s Deep Blue (1997) or DeepMind’s (now Google’s) AlphaZero (2017), would be humbled if exposed to new tasks or contexts without that extensive and expensive retraining.
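Here is a toy illustration of that narrowness, using scikit-learn (our choice for illustration, not anything the headline systems use): a naive Bayes spam filter learns one task from labelled examples and is useless at anything else.

```python
# Toy 'narrow AI': a spam filter learns one task from labelled examples
# and is useless at anything else (chess, translation, essay marking ...).
# Requires scikit-learn; the four training messages are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",             # spam
    "cheap loans click here",           # spam
    "seminar moved to room four",       # legitimate
    "draft essay attached for review",  # legitimate
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

vectoriser = CountVectorizer()
model = MultinomialNB().fit(vectoriser.fit_transform(messages), labels)

print(model.predict(vectoriser.transform(["free prize inside"])))  # -> [1]
```

Feed this ‘model’ a chess position and it will still dutifully score it as spam or not spam: narrow AI has no notion of being out of its depth.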
Subject domains where there are significant knowledge and research bases, or codification with defined laws, rules, protocols and processes, e.g. law, medical/pharmaceutical science, and medical practice, can provide the large data landscapes narrow AI requires to explore and build its patterns and associations, and so produce reliable and valid outputs. Contrast this with the still highly theoretical ‘Artificial General Intelligence’ (AGI) of human-type reasoning, i.e. multiple-domain knowledge, adaptable, self-learning, and problem-solving. If AGI were ever achieved, it could take knowledge from one domain and apply it to another, with the capacity for generating totally unexpected results and capabilities. AGI would revolutionise, for good or ill, the whole world as we currently know it.
But for the moment our narrow AI may very successfully simulate some aspects of ‘teaching’ and may present age- and level-appropriate content and learning activities in an attractively packaged way. It may also automate some or all aspects of testing and assessment of content recall and understanding. It will undoubtedly be used to good effect by successful students and teachers, who will thereby amplify their existing capabilities and tendencies.
But, and it is a very big but, AI risks disadvantaging the already struggling, the ‘being left behind’. The risk is that the gulf between succeeding and failing widens even further. The left-behind become more left-behind. Meanwhile, out in wider society, new left-behinds can be added, for many may find their jobs and careers being assessed for value and perceived efficiency in this new world, where automation and its software and hardware ‘machines’ have the potential to replace the formerly secure knowledge-worker, e.g. journalists, lawyers, clinicians, and … teachers.
One of the current disadvantages of even the ‘narrow’ AI now gaining traction in society is its ability to produce highly plausible and authoritative answers to questions or prompts which may nevertheless be partially or even totally incorrect: the so-called AI ‘hallucination’. The ChatGPT interface even carries a warning that inaccurate responses may occur. Narrow AI builds its ‘knowledge’ through ‘pre-training’, in which it is exposed to a vast database/content archive of examples relevant to a subject area or context. Consequently, the quality of the results an AI application or service presents depends upon the quality of the large-scale data it is exposed to in pre-training. If the content upon which AI is trained is contaminated and distorted by inaccuracy and falsehoods, e.g. acquired from the public web, then periodic hallucination is probably to be expected.
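A crude analogy, invented here purely for illustration and vastly simpler than a real language model: a toy bigram generator can only ever recombine patterns present in its training text, so if the training text contains falsehoods it will fluently reproduce them.

```python
# A toy 'pre-trained' text generator: it can only recombine word pairs seen
# in its training text, so falsehoods in, falsehoods fluently out. This is
# an invented analogy, far simpler than how real language models are built.
import random
from collections import defaultdict

corpus = "the moon is made of cheese and the moon is bright".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)  # record which words follow which

word, output = "the", ["the"]
for _ in range(8):
    nxt = follows.get(word)
    if not nxt:                    # dead end: no observed continuation
        break
    word = random.choice(nxt)      # a plausible next word, true or not
    output.append(word)

print(" ".join(output))  # e.g. "the moon is made of cheese ..."
```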
The corollary is that quality-assured, context-specific large content archives and repositories used for pre-training, e.g. legal or medical, are more likely to produce high-quality, reliable answers. Ideally, the world of education would also have access to hallucination-free AI software and services. For now, health warnings are recommended.
Sometimes such hallucination is amusing and easily spotted; other times less so. Until recently, traditional search engines such as Google just returned lists of web sites that might contain the answers to questions, leaving the searcher to decide which sites, and what content within them, were most relevant and useful. That type of information searching, however, has arguably become progressively contaminated by commercial prioritisation (sites paying to head search results), so that the first results returned are not necessarily the best or most appropriate.
Traditional web searches via Google or alternative search engines, e.g. Microsoft’s Bing, have, however, now responded to the presence and possibilities of OpenAI’s ChatGPT (and other AI models) by incorporating outputs from either their own AI engines (called Gemini in Google’s case, previously known as Bard) or from ChatGPT itself into search results. Consequently, a Google search constructed as a question will now prompt a Gemini-generated response as well as its traditional pages of web sites.
Microsoft’s ‘partnership’ with OpenAI (Microsoft is OpenAI’s major investor) now enhances a number of its products, including its search engine Bing.
We are now going to make a slight digression, but its purpose will become clear and is relevant to the learning and teaching theme of this post.
This AI-generated increase in the potency of search engines has a dramatic environmental downside. An AI query initiates processing activity well beyond the capabilities of a humble consumer-level device, which means the processing is undertaken in a remote and vast data centre in the so-called ‘cloud’, e.g. Amazon Web Services. Millions of such queries are processed simultaneously, and that consumes electricity, which generates heat, which needs cooling, which consumes more electricity and water, and so on. The user gets their results. The planet gets warmer.
On average, a ChatGPT query needs nearly 10 times as much electricity to process as a Google search. In that difference lies a coming sea change in how the US, Europe, and the world at large will consume power, and how much that will cost. (Goldman Sachs, May 14, 2024, AI is poised to drive 160% increase in data center power demand)
Should this Goldman Sachs assertion drive some readers back towards traditional search engines, however, scroll back a little and re-read Dada de Dada’s description of how the dominant search engines are already incorporating AI into their product lines: they are ALL going to be increasing their energy consumption. In fact, a pure ChatGPT (or alternative service) query, sans search engine, may end up using proportionately less energy than the search-engine/AI hybrids, particularly as dedicated AI processors become more efficient, consume less power, and produce less heat.
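The arithmetic behind such claims is simple enough to sketch. The per-query figures below are our own rough assumptions (a commonly cited ballpark of about 0.3 Wh for a traditional search, multiplied by the ‘nearly 10 times’ from the Goldman Sachs note above), and the daily query volume is invented purely to show the scale, not a measured statistic.

```python
# Back-of-envelope only: per-query figures are rough assumptions, and the
# daily volume is invented purely to illustrate scale.
SEARCH_WH = 0.3                  # assumed Wh per traditional web search
LLM_WH = SEARCH_WH * 10          # assumed Wh per LLM-backed query (~10x)
QUERIES_PER_DAY = 1_000_000_000  # assumed volume, for illustration

for name, wh in [("traditional search", SEARCH_WH), ("LLM query", LLM_WH)]:
    mwh = wh * QUERIES_PER_DAY / 1_000_000  # Wh -> MWh
    print(f"{name}: {mwh:,.0f} MWh per day")
```

On those invented numbers, a billion traditional searches would draw around 300 MWh a day and a billion LLM-backed queries around 3,000 MWh: the per-query difference is tiny, but multiplied across global traffic it is anything but.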
But this digression into the environmental costs of AI also provided an unexpected illustration of pedagogical significance.
A Google query generated the response below from the company’s Gemini AI engine. Note the link symbols at the end of each paragraph: these are the cited sources for the AI-generated response. Note, however, that only one of these sources can be considered an academic source generating new findings, as opposed to echoing what is already in the public domain.
Entering the same query as a prompt directly into ChatGPT, however, generated the following response.
The energy consumption of a single ChatGPT query can vary based on multiple factors, including model size, hardware efficiency, data center efficiency, and the length of the response generated. While precise figures for energy use per query aren’t typically shared publicly, estimates based on similar models provide some general insights … Using information from similar models and data centers, a single ChatGPT query has been estimated to consume around 0.1 to 1 watt-hour (Wh) of energy … 0.1 Wh is roughly equivalent to keeping an LED light on for a minute or two. 1 Wh would be equivalent to using a typical LED light for about an hour.
Note how the ChatGPT response did not directly cite the sources of its information. A further prompt of “what were the sources for your response” generated, arguably, a better quality result than Google’s. It is too extensive to include here but can be viewed on page 5 of this post, along with the complete prompt-response sequence.
What the above illustrates is that AI searches, whether standalone or integrated within search engines, respond to a prompt by returning what appears to be an answer. But, as the contrast between the Google and ChatGPT results shows, it is AN answer, not THE answer.
Seeking answers by formulating AI prompts provides a much more satisfactory user experience because the AI app or service has apparently undertaken analysis of the prompt (in AI parlance, ‘inference’), accessed its knowledge base, and then synthesised a result which is far more than just a simple list of web sites, e.g. see page 3 of this post.
On the surface such an output could be both a student’s and teacher’s dream power-tool. Or perhaps not.
For the student under assignment or homework pressure, the temptations of copy and paste from AI sources may be overwhelming, bypassing the time-consuming stages of learning such as comprehension, reflection, critical analysis, problem-solving, and application to other related or unrelated tasks. Current AI is perfectly capable of crafting at least the first draft of a passable essay on many subjects.
The challenge for the teacher here is how to respond to the reality of a technology that is, on the one hand, such a potentially powerful tool of learning but is, on the other, capable of distorting learning through either the passive consumption of misinformation and disinformation, or the misrepresentation of effort, knowledge acquisition and comprehension.
Institutions of education (of all levels) need to grapple with this new reality. Some, undoubtedly, will try to strictly control or even ban the presence of AI. They will fail. For, in and beyond the school or college gates, this genie is already out of the bottle. The students will use it. The teachers will use it. The managers will use it.
Dada de Dada suggests, therefore, that it is better to have an open disclosure policy regarding what, when, how, and why AI is used: what forms AI in the school takes, e.g. applications, services, learning management systems; when it is appropriate to use it, in classrooms or online; how it is being employed in support of learning, teaching and management processes; and why it is being used (or not used), stating what particular benefits are asserted, hoped for, or being evidenced. There is considerable scope for research and pilot projects here.
There are also implications for both staff and student development, not least because of the need to manage the unrealistic expectations, or unjustified fears, of leaders, managers, and staff. At the coalface, both teachers and students will need to learn to navigate the shallows and rapids of AI, including how to craft prompts that will generate quality answers. The AI industry has already created the new professional role of ‘prompt engineer’, someone who can massage outputs from the black box of AI that normal mortals cannot.
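Much of that craft amounts to stating role, task, constraints, and output format explicitly rather than hoping the model guesses them. A minimal sketch, with entirely illustrative field contents:

```python
# The anatomy of a deliberate prompt: role, task, constraints, and output
# format stated explicitly. All field contents here are illustrative.
def build_prompt(role, task, constraints, output_format):
    parts = [
        f"You are {role}.",
        f"Task: {task}",
        "Constraints:",
        *(f"- {c}" for c in constraints),
        f"Answer format: {output_format}",
    ]
    return "\n".join(parts)

print(build_prompt(
    role="a secondary-school science teacher",
    task="explain photosynthesis to a 13-year-old",
    constraints=["no more than 150 words", "no unverifiable statistics"],
    output_format="three short paragraphs",
))
```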
In its current stage of development, however, AI cannot measure, and is indifferent to, social and family background, or physical, cognitive, and mental health constraints. All that current manifestations of AI are likely to be able to do is adjust content, assessment styles, presentation and flow, plus record and notify lack of progress to a human being with the socially designated role of teacher. A human being who has to be aware of, and hopefully concerned with, family background, or physical, cognitive, and mental health constraints. A human being who battles daily to improve the situation for these less fortunate, but who is unlikely to have solutions to problems and challenges having their genesis well beyond the school/college gates or zone of influence.
What then would AI, even if artificial general intelligence became the reality, actually do? It can undoubtedly automate and even improve some aspects of providing knowledge acquisition opportunities (note the emphasis), but can it fix social backgrounds, social and individual attitudes, and physical or mental conditions and constraints? The human teacher cannot reasonably be expected to work this miracle for the millions who require such fixes, because that comes down to political will, ideologies, and economic/physical resources. It is, therefore, difficult to envisage how Artificial Intelligence, no matter how ‘intelligent’, can offer solutions to this very human reality. Indeed, it could easily magnify what are, in essence, human-generated values and problems.
In conclusion, as highlighted earlier, the ‘narrow’ AI of today builds its knowledge from an analysis of vast quantities of hopefully quality-assured data within a subject or professional domain. The public internet certainly provides vast quantities of data, much of which is far from quality-assured. While that has, to date, provided a useful feedstock for demonstrating AI’s potential, it is also possible to train AI models on private, quality-assured data archives. That, however, would be a major challenge for any single organisation of any type, no matter how well endowed, and one which may be well beyond the capacity of any individual school, college or university. The expense and scale of such a development may be too great a challenge even at a national level. International collaboration would probably be necessary: a happy thought in our currently turbulent world.
If you want to view how ChatGPT responded to the original question (prompt) that acted as a catalyst for the Dada de Dada poem Learning Machine, then go to page 3. Otherwise, thank you for taking the time to read this far.
To share this post, other people can scan the QR code below directly from your phone screen. Alternatively, send the image to them via whatever is your preferred messaging system.