Guest post | James Lovelock: the scientist-inventor who transformed our view of life on Earth

Mark Maslin, UCL

James Lovelock, the maverick scientist and inventor, died surrounded by his family on July 27 2022 – his 103rd birthday. Jim led an extraordinary life. He is best known for his Gaia hypothesis, developed with the brilliant US biologist Lynn Margulis in the 1970s, which transformed the way we think of life on Earth.

Gaia challenged the orthodox view that life simply evolved and adapted to the ever-changing environment. Instead, Lovelock and Margulis argued that species not only competed but also cooperated to create the most favourable conditions for life.

Earth is a self-regulating system maintained by communities of living organisms, they claimed. These communities adjust oxygen and carbon dioxide levels in the atmosphere, salinity in the ocean and even the planet’s temperature to keep them within the acceptable bounds for life to thrive.

Just like Charles Darwin before him, Lovelock published his new, radical idea in a popular book, Gaia: A New Look at Life on Earth (1979). It was an instant hit that challenged mature researchers to reassess their science and encouraged new ones. As my friend and colleague Professor Richard Betts at the Met Office Hadley Centre put it:

He was a source of inspiration to me for my entire career, and in fact his first book on Gaia was a major reason why I chose to work on climate change and Earth system modelling.

Not only did the book challenge the classical Darwinian notion that life evolved and prospered through constant competition and dogged self-interest, it founded a whole new field: Earth system science. We Earth system scientists study all the interactions between the atmosphere, land, ocean, ice sheets and, of course, living things.

Lovelock also inspired the environmental movement by giving his ideas a spiritual overtone: Gaia was the goddess who personified the Earth in Greek mythology.

This antagonised many scientists, but created a lot of fruitful debate in the 1980s and 1990s. It is now generally accepted that organisms can enhance their local environment to make it more habitable. For example, forests can recycle half the moisture they receive, keeping the local climate mild and stabilising rainfall.

But the original Gaia hypothesis, that life regulates the environment so that the planet resembles an organism in its own right, is still treated with scepticism by most scientists. This is because no workable mechanism has been found to explain how the forces of natural selection, which operate on individual organisms, could have produced such planetary-scale homeostasis.

An aerial view of morning mist over a rainforest.
Organisms alter their environment to make it more favourable to life. Avigator Fortuner/Shutterstock

An independent scientist

There was much more to James Lovelock. He described himself as an “independent scientist since 1964”, a freedom made possible by the income generated from his invention of the electron capture detector while studying for a PhD in 1957.

This matchbox-sized device could measure tiny traces of toxic chemicals. It was essential in demonstrating that chlorofluorocarbons (CFCs) in the atmosphere, which originated in aerosols and refrigerators at the time, were destroying the ozone layer. It also showed that pesticide residues exist in the tissues of virtually all living creatures, from penguins in Antarctica to human breast milk.

A small device resembling a spindle with a white band in the middle.
The electron capture detector Lovelock invented for measuring air pollution. Science Museum London, CC BY-SA

The money he earned from the electron capture detector gave him his freedom because, as he was fond of telling people, the best science comes from an unfettered mind – and he hated being directed. The detector was just the start of his inventing career and he filed more than 40 patents.

He also wrote over 200 scientific papers and many popular books expanding on the Gaia hypothesis. He was awarded scientific medals, international prizes and honorary doctorates by universities all around the world.

Dr Roger Highfield, the science director at the London Science Museum, summed Jim up perfectly:

“Jim was a nonconformist who had a unique vantage point that came from being, as he put it, half-scientist and half-inventor. Endless ideas bubbled forth from this synergy between making and thinking. Although he is most associated with Gaia, he did an extraordinary range of research, from freezing hamsters to detecting life on Mars … He was more than happy to bristle a few feathers, whether by articulating his dislike of consensus views, formal education and committees, or by voicing his enthusiastic support for nuclear power.”

Jim was deeply concerned by what he saw humanity doing to the planet. In his 1995 book The Ages of Gaia, he suggested that the warm periods between ice ages, like the current Holocene, are the fevered state of our planet. Because over the last two million years the Earth has shown a clear preference for a colder average global temperature, Jim understood global warming as humanity adding to this fever.

Jim did despair at humanity’s inability to look after the environment and much of his writing reflected this, particularly his book The Revenge of Gaia in 2006. But at the age of 99, he published Novacene: The Coming Age of Hyperintelligence (2019), an optimistic view which envisaged humanity creating artificially intelligent life forms that would, unlike us, understand the importance of other living things in maintaining a habitable planet.

His dwindling faith in humanity was replaced by trust in the logic and rationality of AI. He left us with hope that cyborgs would take over and save us from ourselves.

Mark Maslin, Professor of Earth System Science, UCL

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Opening image: Photo by NASA on Unsplash

Further reading: more on protecting the planet from The Knowledge Exchange blog

Digitalisation and decarbonisation: a 2-D approach to building back greener

Across the world, two disruptive and powerful trends are taking hold: digitalisation and decarbonisation. At times, it seems as if these two forces are acting against each other, with digital technologies accelerating economic growth, but also consuming huge quantities of energy and emitting high amounts of CO2.

But it’s becoming clear that rather than competing, digitalisation and decarbonisation can work together in ways that achieve sustainable economic growth without destroying our home planet.

The net zero imperative

We’re now familiar with the evidence that global warming will do irreparable damage to the world unless we can reduce the greenhouse gases that cause it. Getting to net zero means achieving the right balance between the amount of greenhouse gas produced and the amount removed from the atmosphere.

The challenge is one not just for national governments. Businesses are facing growing regulatory, reputational and market-driven pressures to transform their business models and embrace the shift to a low-carbon, sustainable future. It’s here that digitalisation can support us on the path to net zero.

The digital possibilities

In 2020, a Green Alliance study reported that digital technologies could have significant positive environmental impacts, including accelerating the deployment of clean technologies and helping businesses to stop wasting energy and resources.

But the report also found that many UK businesses are still not making use of digital solutions: only 42% of UK businesses have purchased cloud computing services, compared to 65% in Finland and 56% in Denmark. The authors highlighted a number of factors explaining slower digital adoption, including lack of digital skills, concerns about cybersecurity and privacy, and underinvestment in infrastructure.

AI as an ally in the battle against climate change

Another report, published last year by PwC and Microsoft, explored the potential of artificial intelligence (AI) in tackling the climate crisis. Focusing on agriculture, water, energy and transport, the report revealed numerous ways in which AI can have positive environmental and economic impacts.

  • In agriculture, AI can better monitor environmental conditions and crop yields;
  • AI-driven monitoring tools can track domestic and industrial water use, and enable suppliers to pre-empt water demand, reducing both wastage and shortages;
  • AI’s deep learning, predictive capabilities can help manage the supply and demand of renewable energy.

The report stressed that AI cannot act on its own, but will rely on multiple complementary technologies working together, including robotics, the internet of things, electric vehicles and more.

While the challenges of putting AI to work in tackling the climate crisis are great, the prizes of doing so are equally significant. The PwC/Microsoft report estimated that across the four sectors studied AI could:

  • contribute up to $5.2 trillion to the global economy in 2030;
  • reduce worldwide greenhouse gas emissions by up to 4.0% in 2030 (an amount equivalent to the 2030 annual emissions of Australia, Canada and Japan combined);
  • create up to 38.2 million net new jobs across the global economy.

Put simply, AI can enable our future systems to be more productive for the economy and for nature.

The downsides of digitalisation

As we’ve previously reported, the infrastructure that supports the digital world comes with significant energy costs and environmental impacts. From internet browsing and video and audio streaming to manufacturing, shipping and powering digital devices, digital has its own substantial carbon footprint.

The PwC/Microsoft report acknowledges that there will be trade-offs and challenges:

“For example, AI with its focus on efficiency through automation might potentially lead to ‘over exploitation’ of natural resources if not carefully guided and managed. AI, especially deep learning and quantum deep learning, could also lead to increased demand for energy, which could be counter-productive for sustainability goals, unless that energy is renewable and that electricity generation is developed hand-in-hand with application deployment.”

In addition, there is a need to ensure that all parts of the world are able to capture the benefits of digital technologies – not just the more advanced economies.

Final thoughts

Decoupling economic growth from greenhouse gas emissions is one of the biggest challenges of our lifetime. Digital technologies have enormous potential not only to achieve decarbonisation, but to improve economic performance.

As both the Green Alliance and PwC/Microsoft reports have underlined, this can be achieved by taking a joined-up approach to digitalisation and green growth. This means thinking beyond the technology to consider issues such as investing in education and training to develop the skills needed to support the growth of clean industries and digitalisation, addressing privacy concerns and supporting businesses in their drive to shrink their carbon footprints.

As we emerge from a pandemic which has inflicted great damage to economies, but which has also demonstrated the possibilities of changing longstanding habits, digitalisation is presenting us with opportunities to ensure that building back greener is more than just a slogan.


Further reading: more on climate change and technology from The Knowledge Exchange blog:

Follow us on Twitter to find out what topic areas are interesting our research team.

‘Breaking the bias’ – gender equality and the gig economy

Yesterday marked the 111th International Women’s Day, a global day of celebration for the social, economic, cultural and political achievements of women. But it is also an opportunity to reflect on and further the push towards gender equality.

While there has been much to celebrate, it has been suggested that the pandemic threatens to reverse decades of progress made towards gender equality as women have been hit harder both socially and economically than men. However, the shift in working practices during the pandemic may help to transform the future of work to the benefit of women.

There has been continued growth in the digital platform or gig economy workforce, with many women entering this type of work because of the pandemic. The gig economy has been shown to have the potential to improve gender equality in the economy, but it is not without its challenges when it comes to gender parity, as recent research has highlighted.

A platform for gender equality?

The report from the European Institute for Gender Equality (EIGE) highlights that the growth of artificial intelligence (AI) technology and platform or gig work has the potential to create new opportunities for gender equality, but at the same time can reinforce gender stereotypes, sexism and discrimination in the labour market. It found that some of the key attractions of gig work, such as its flexibility, are often disadvantageous to women.

The EIGE surveyed almost 5000 workers in the platform economy across 10 countries to understand who they are, why they do platform work, and what challenges they face.  It found that:

  • a higher share of women (45%) than men (40%) among regular platform workers indicated that they worked on digital labour platforms because they were a good way to earn (additional) income;
  • flexibility, expressed as the ability to choose working hours and location, motivated about 43% of women and 35% of men;
  • a higher share of women (36%) than men (28%) said they do platform work because they can combine it with household chores and family commitments;
  • 36% of women started or restarted platform work because of the pandemic, compared to 35% of men.

The flexibility of platform work has consistently been referred to as the main motivating factor for engaging in such work. And this flexibility has been found to be more important for women, particularly in relation to family commitments. In practice, however, the research shows that flexibility is limited, with as many as 36% of women and 40% of men working at night or at the weekend, and many working hours they cannot choose.

On the plus side of the gender equality debate, it seems the gig economy is slightly less gender-segregated than the traditional labour market, with a higher share of men doing jobs usually done by women. For example, traditionally female-dominated sectors such as housekeeping and childcare are more gender-diverse in the gig economy:

  • housework (women: 54%, men: 46%)
  • childcare (women: 61%, men: 39%)
  • data entry (women: 47%, men: 53%)

But the EIGE’s survey also suggests a degree of skills mismatch and overqualification in platform work that affects women in particular. It suggests that highly educated women are more likely to do jobs that do not match their level of education, putting them at greater risk of losing their skills.

Gender bias in AI

The report also shines a light on the issue of gender bias in AI which can be a particular issue in the gig economy where such systems are frequently used.

It argues that gender bias can be embedded in AI by design, reflecting societal norms or the personal biases of those who design the systems. For example, algorithms trained on biased data sets can perpetuate historically discriminatory hiring practices, which can lead to female candidates being discarded.

Platform workers can also be monitored using time-tracking software, which deducts ‘low productivity time’ from pay, increasing ‘digital wage theft’, to which women are more vulnerable.

Considering just 16% of AI professionals in the EU and UK are women – a percentage which decreases with career progression – this is something that needs to be addressed if gender parity in the gig economy, and indeed the entire modern economy, is to be achieved.

Way forward

The EIGE report welcomes new proposed EU legislation to improve the working conditions of platform workers and the EU’s proposed ‘Artificial Intelligence Act’, suggesting this shows promise when it comes to minimising the risk of bias and discrimination in AI. Also highlighted as a positive sign is the EU’s commitment to train more specialists in AI, especially women and people from diverse backgrounds.

Nevertheless, one of the conclusions of the report is that regulations and policy discussions on platform work are largely gender blind and that action is required on multiple levels to address gender inequalities and discrimination in the gig economy.

To this end, the report recommends that the EU needs to do the following:

  • mainstream gender into the policy framework on AI-related transformation of the labour market;
  • increase the number of women in, and the diversity of, the AI workforce;
  • address the legal uncertainty in the employment status of platform workers to combat disguised employment;
  • address gender inequalities in platform work;
  • ensure that women and men platform workers can access social protection.

There are lessons here for the UK too. Perhaps fulfilling these actions will go some way towards improving the situation by the time we get to the next International Women’s Day.


If you enjoyed this article, you may also like some of our previous posts:

Follow us on Twitter to see which topics are interesting our research officers and keep up to date with our latest blogs

Local government and artificial intelligence: the benefits and the challenges

Photo by Jackson So on Unsplash

By James Carson

Artificial intelligence (AI) has come a long way since computer pioneer Alan Turing first considered the notion of ‘thinking machines’ in the 1950s. More than half a century later, advances such as natural language processing and translation, and facial recognition have taken AI out of the computer lab and onto our smartphones. Meanwhile, faster computers and large datasets have enabled machine learning, where a computer imitates the way that humans learn.

AI has already had important impacts on how we live and work: in healthcare, it’s helping to enhance diagnosis of disease; in financial services AI is being deployed to spot trends that can’t be easily picked up by conventional reporting methods; and in education, AI can provide learning, testing and feedback, with benefits both to students and teachers. And now, intelligent automation is being adopted by local government.

AI goes local

A decade of austerity has left local councils struggling to ‘do more with less’. The Covid-19 pandemic has presented additional challenges, but has also accelerated efforts by local government to find digital solutions.

AI offers local authorities the benefits of streamlining routine tasks and processes, freeing up staff to focus on higher value activities which deliver better services and outcomes to citizens. Intelligent automation could also have important economic impacts. IPPR has estimated that AI could save councils up to £6bn in social care costs.

When it comes to system and data updating, intelligent automation really comes into its own. From managing council tax payments to issuing parking permits, there are now digital solutions to the many task-driven processes that are such a major part of local government’s work.

Many local councils are also exploring the application of chatbots or virtual assistants. These technologies enable customer services to provide automated, human-like answers to frequently asked questions on subjects as varied as waste management, street lighting and anti-social behaviour. The time and cost savings from this kind of digital solution can be substantial. Newham Council in London deployed a multilingual chatbot to answer residents’ questions. Within six months, the technology had answered 10,000 questions, saved 84 hours of call time and generated cost savings of £40,000.
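
To make the idea a little more concrete, the sketch below shows the kind of question-matching logic that sits at the heart of a simple FAQ chatbot. Everything in it – the stored questions, answers and scoring rule – is invented for illustration; commercial services of the kind Newham deployed use far more sophisticated natural language models.

```python
# Minimal sketch of a keyword-matching FAQ "chatbot" of the kind councils deploy.
# The questions, answers and scoring are illustrative only.

FAQS = {
    "when is my bin collected": "General waste is collected weekly; check your postcode on the council website.",
    "how do i report a broken street light": "Report the lamp post number and street via the online fault form.",
    "how do i report anti social behaviour": "Use the community safety reporting form, or call 101 in non-emergencies.",
}

def tokenise(text: str) -> set[str]:
    """Lower-case the text and split it into a set of words."""
    return set(text.lower().replace("?", "").split())

def answer(question: str, threshold: float = 0.3) -> str:
    """Return the answer whose stored question shares the most words with the query."""
    query = tokenise(question)
    best_q, best_score = None, 0.0
    for stored_q in FAQS:
        overlap = len(query & tokenise(stored_q)) / max(len(query), 1)
        if overlap > best_score:
            best_q, best_score = stored_q, overlap
    if best_q is None or best_score < threshold:
        return "Sorry, I couldn't find an answer - transferring you to an adviser."
    return FAQS[best_q]

print(answer("When will my bins be collected?"))
```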

The challenges of AI in local government: getting it right

Earlier this year, a report from the Oxford Commission on AI and Good Governance identified the major challenges facing local authorities when considering AI.

Inaccurate or incomplete data can delay or derail an AI project, so it’s vital that data quality issues are addressed early on. The report highlighted a project where one local authority explored how predictive analytics might be used to help prioritize inspections of houses in multiple occupation (HMOs). Predictive analytics involves the use of historic data to predict new instances. But in this case the challenges of cleaning, processing and merging the data proved too intractable to produce successful predictions.
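
As a rough illustration of what "using historic data to predict new instances" means in practice, the sketch below trains a model on a handful of made-up inspection records and uses it to rank uninspected properties. The features, figures and addresses are entirely invented; this is not the data or the model used in the project described above.

```python
# Illustrative sketch of predictive analytics for prioritising inspections.
from sklearn.linear_model import LogisticRegression

# Historic inspections: [number of occupants, property age in years, past complaints]
X_historic = [
    [6, 90, 3], [2, 15, 0], [8, 120, 5], [3, 40, 1],
    [7, 80, 4], [1, 10, 0], [5, 60, 2], [2, 25, 0],
]
# 1 = serious hazards found at inspection, 0 = none
y_historic = [1, 0, 1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X_historic, y_historic)

# Score properties not yet inspected and rank them for visits.
candidates = {"12 High St": [7, 110, 2], "3 Elm Rd": [2, 20, 0]}
scores = {addr: model.predict_proba([feats])[0][1] for addr, feats in candidates.items()}
for addr, p in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{addr}: predicted risk {p:.2f}")
```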

Another important step for local authorities is to clearly define the objectives of an AI project, providing a clear vision of the outcomes, while managing expectations among all affected stakeholders – especially senior managers. The report points to a successful project implemented by Manchester City Council which developed an integrated database that allowed them to automate record searches and build predictive tools. The project had a clearly stated aim of identifying troubled families to participate in the government’s payment-by-results programme. This approach gave the project a specific focus and an easily measurable assessment of success.

It’s also important for local councils and technology suppliers to work together, ensuring that suppliers are aware of local contexts, existing data and processes. At the same time, making full use of in-house expertise can help AI technologies work better in a local government setting. The Oxford Commission report explains that after the disappointing results from the previously mentioned HMOs project, in-house data scientists working in one of the participating local authorities developed their own solution.

Sometimes, councils will discover that AI is a good fit in some parts of their work, but doesn’t work in others. In 2019, Oxford City Council explored whether chatbots could help solve design problems in some of their services. The council found that, while waste and recycling enquiries could be easily handled by a chatbot, the complex nature of the planning service would have made it difficult to remove humans from the conversations taking place in this setting. That said, another council has found it possible to develop a chatbot for its planning applications.

At the same time, digitalisation is compelling councils to adjust to new ways of working, something discussed in a Local Government Association presentation by Aylesbury Vale District Council.

The future of AI in local government

Since we last looked at this subject, local government involvement in AI has increased. But there are still important governance and ethical arrangements to consider so that AI technologies in public services can achieve benefits that citizens can trust.

The Oxford Commission report set out a number of recommendations, including:

  • minimum mandatory data standards and dedicated resources for the maintenance of data quality;
  • minimum mandatory guidance for problem definition and project progress monitoring;
  • dedicated resources to ensure that local authorities can be intelligent consumers and capable developers of AI;
  • a platform to compile all relevant information about information technology projects in local authorities.

Final thoughts

Three years ago, MJ magazine described AI as a ‘game-changer’ for local government. The potential benefits are clear. AI can generate labour and cost savings, but also offers the promise of reducing carbon footprints and optimizing energy usage. But while residents may welcome greater efficiency in their local councils, many will have concerns about data privacy, digital inclusion and trust in the use of public data.

At its best, artificial intelligence will complement the services provided by local authorities, while ensuring that the all-important element of human intelligence remains at the heart of local government.


Further reading: more on digital from The Knowledge Exchange blog

Cities on the edge: edge computing and the development of smart cities

From Barcelona to Glasgow, across the world, a trend towards making our cities “smart” has been accelerating in line with demands for cities to become more responsive to the needs of residents. In the wake of the Covid-19 pandemic, there is a newfound urgency to ensure that the places where we live are more resilient and are able to respond to changes in behaviour. For example, the need to keep a two-metre distance from people outside of your household required cities to take action to widen pavements and deploy pop-up active transport infrastructure to prevent overcrowding on public transport.

Over the past twelve months, cities across the world have taken a variety of different actions in order to support the almost overnight transition to what has been described as the “new-normal”. In the year ahead, it’s likely we will see further changes in resident behaviour, as the vaccine roll-out enables a transition out of the public health emergency and allows for the gradual reopening of society. Cities once again will have to be ready to react to changes in how people interact with their environment. However, the extent to which people will go back to pre-pandemic behaviours is not yet clear.

Not so smart cities

The ability to monitor and analyse the ways in which people interact with cities has been heralded as a key benefit of the development of smart cities, and as highlighted above, in some ways it has never been more important. However, the way in which smart city infrastructure currently collects and analyses data tends to be relatively “dumb”, in the sense that data is sent to a separate location to be analysed, rather than occurring on the device that’s collecting it.

Due to the sheer amount of data being transferred for analysis, this process can be relatively slow and is entirely dependent on the reliability and speed of a city’s overall network infrastructure. As a result, the ability to take real-time action, for example, to change traffic management systems in order to reduce congestion, is potentially limited.  

A good example of a device that acts in this way is a smart speaker, which is capable of listening out for a predetermined wake-word but is relatively incapable of doing anything else without a network connection. All other speech after a user has said the wake-word tends to be processed at a central server. Therefore, any disruption to the smart speaker’s ability to communicate with a server in the cloud will prevent it from completing the simplest of tasks.

This is why Barclays have argued that the future of smart city development will heavily rely upon a technology known as “edge computing”, which enables data analysis to be conducted closer to smart city infrastructure, rather than being sent to a distant central server.

What is edge computing?

Put simply, the concept of edge computing refers to computation that is conducted on or near a device that’s collecting data, for example, a smart traffic light. Data collected by the device is processed locally, rather than transmitted to a central server in the cloud, and decisions can be made in real-time locally on the device. Removing the need to transmit data before any action is taken facilitates real-time autonomous decision-making, which some experts argue could potentially make our cities operate more efficiently.
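
A toy comparison may help illustrate the difference. In the sketch below, a hypothetical smart traffic sensor either ships its raw readings off to a central server and waits, or applies a simple rule locally and acts immediately. The sensor, readings and threshold are invented for the example.

```python
# Minimal sketch contrasting cloud-style and edge-style processing for a
# hypothetical smart traffic sensor. All values are invented.

READINGS = [12, 14, 40, 45, 9]  # vehicles detected per minute

def cloud_style(readings):
    """Ship every raw reading to a central server and wait for a decision."""
    payload = {"sensor_id": "junction-7", "readings": readings}
    # send_to_cloud(payload)  # network round trip; decision depends on connectivity
    return "await instruction from central server"

def edge_style(readings, congestion_threshold=30):
    """Decide locally on the device, acting in real time with no round trip."""
    if max(readings) > congestion_threshold:
        return "extend green phase to clear congestion"
    return "keep normal signal timing"

print(edge_style(READINGS))  # -> extend green phase to clear congestion
```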

Additionally, as edge computing is not reliant upon a connection to a central server, it offers enhanced security and data privacy protections, which can reassure citizens that collected data is safe and make smart city infrastructure less vulnerable to attack. Even if an attacker were to breach one part of the edge computing network, it would be easy to isolate the affected parts without compromising the entire network.

In the near future, smart city infrastructure will be vital to enabling autonomous vehicles to navigate our cities, making security of these technologies all the more important.

Cities on the edge

An example of the application of edge computing in smart city infrastructure can be seen in the development of smart CCTV cameras. According to the British Security Industry Association, there are an estimated 4 to 5.9 million CCTV cameras across the UK, one of the largest totals in the world. Each of these cameras records and stores a huge amount of data each day, and for the most part this footage goes unused while creating the need for an extensive amount of expensive storage.

Edge-enabled smart CCTV cameras could provide a solution to this issue through on-device image analytics, which are able to monitor an area in real-time and only begin recording when a pre-determined event occurs, for example, a vehicle collision. This significantly reduces the amount of footage that needs to be stored, and acts as an additional layer of privacy protection, as residents can be reassured that CCTV footage will only be stored when an incident occurs.
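
The sketch below illustrates the underlying idea of event-triggered recording: keep only a short rolling buffer of footage and persist it when an event of interest is detected. The frames and the "collision detector" are simple stand-ins for real on-device image analytics.

```python
# Sketch of the "only record when something happens" idea behind edge-enabled CCTV.
from collections import deque

def looks_like_collision(frame: str) -> bool:
    """Placeholder for an on-device image-analytics model."""
    return frame == "collision"

def monitor(frames, pre_event_buffer=3):
    """Keep a short rolling buffer; persist footage only around detected events."""
    buffer = deque(maxlen=pre_event_buffer)
    stored = []
    for frame in frames:
        buffer.append(frame)
        if looks_like_collision(frame):
            stored.extend(buffer)   # keep the moments leading up to the event
            buffer.clear()
    return stored

frames = ["quiet"] * 100 + ["collision"] + ["quiet"] * 100
print(len(monitor(frames)))  # only 3 of 201 frames are stored
```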

Additionally, edge-enabled smart CCTV cameras can also be used to identify empty parking spaces, highlight pedestrian/vehicle congestion, and help emergency services to identify the fastest route to an ongoing incident. Through the ability to identify problems in real-time, cities can become more resilient, and provide residents with information that can allow them to make better decisions.

For example, if an increased level of congestion is detected at a train station, nearby residents could be advised to select an alternative means of transport, or asked to change their journey time. This could help prevent the build-up of unnecessary congestion, and may be helpful to those who may wish to continue to avoid crowded spaces beyond the pandemic.

Final thoughts

Over the past year, the need for resilience has never been more apparent, and the way we interact with the world around us may never be the same again. The ability for cities to monitor and respond to situations in real-time will be increasingly important, as it’s not necessarily clear the extent to which residents will return to pre-pandemic behaviours.

As a result, smart city infrastructure may be more important than ever before in helping to develop resilient cities which can easily respond to resident needs. Edge computing will act as the backbone of the smart city infrastructure of the future, and enable new and exciting ways for cities to become more responsive.


If you enjoyed this article you might also like to read:

Follow us on Twitter to find out which topics are interesting our research team

Skilling up: the case for digital literacy

As technology has advanced, and it has become harder to name simple tasks that have not become digitised in some form, the need for everyone in society to have a basic level of digital skills has markedly increased. From applying for jobs to ordering a coffee via an app, digital technology has undoubtedly changed the way we all go about our day-to-day lives. For those with the appropriate digital skillset, these advances may be viewed as a positive transition to more efficiently operated services. However, for those without the necessary digital skills, there is a risk that they will struggle to access even the most basic of essential services, such as opening a bank account.

Therefore, it is no surprise that the digital skills gap is a concern for governments and businesses alike, with a recent report by the House of Commons Science and Technology Committee highlighting that the UK could be losing out on £63 billion in GDP each year due to a general lack of digital skills.

The issue of the digital skills gap has never been more pronounced, as a result of the ongoing Coronavirus pandemic, where various restrictions have required us all to embrace digital technology, in order to work, learn and socialise with our friends and family.

Digital skills at work

The extent to which technology has changed the world of work cannot be overstated, with a recent CBI report stating that the UK is in the midst of a fourth industrial revolution, spurred on by advancements in automation, artificial intelligence and biotechnology. Research conducted by the CBI found that 57% of businesses say that they will need significantly more digital skills in the next five years. Therefore, the workforce of the future will need to be supported to gain these digital skills, in order to gain employment and enable British business to benefit from the digital revolution.

Concerns have been raised regarding the ability of young people to access opportunities that will support them in developing transferable digital skills. Grasping key digital skills, such as the ability to navigate Microsoft Office, is undoubtedly necessary, but is no longer enough to meet the needs of employers.  

Digital literacy: the bedrock for a fourth industrial revolution

The ability not just to use digital technology, but to truly understand how it works, is known as digital literacy. A report from the House of Commons Education Select Committee sets out how crucial digital literacy will be to the success of the fourth industrial revolution. The speed with which technology is advancing and changing means that within just a few years, digital platforms that we use today may become outdated. Therefore, it is unwise to focus on a single platform when developing digital skills, as inevitably the platform will either gain new functionalities or become obsolete. Instead, digital skills should be developed in a way that ensures they are future-proofed and will not go to waste when the inescapable next big technological or societal change occurs.

Why do we need digital literacy?

An example of why digital literacy is important can be seen in the way in which many of us have adapted to work from home, as a result of the Coronavirus pandemic. Restrictions on face-to-face meetings forced us to consider new ways to work collaboratively and explore the myriad of platforms that facilitate video-calling, file sharing and instant messaging. Whilst we may have already had experience using existing video-conferencing platforms, such as Skype, it was clear that each organisation had to consider using new software, such as Zoom and Google Hangouts.

Many of us would never have used these software packages before and were expected to rapidly get to grips with them in real time, without the usual in-person back-up networks of colleagues and IT support. This highlights the importance of digital literacy: the ability to take insight gained from interacting with one digital platform and apply it to another was vital for business continuity during the initial lockdown. Transferring knowledge from one platform to another in this way ensures we can harness the opportunities of digital advancements as they occur, without the need for lengthy additional training.

Developing digital literacy

Developing digital literacy can be difficult. Research conducted by the Nuffield Foundation found that providing access to computers in schools was not enough to encourage the development of digital literacy. Instead, FutureLab advises that computers should be embedded and used across the curriculum. Ideas put forward within FutureLab’s Digital Literacy handbook include:

  • Support children to make mistakes when using technology: allow them to create content that may not be to the high standard we would expect, and enable them to consider how they can improve the quality of their output.
  • Provide opportunities for children to work collaboratively online, e.g. creating a wiki or real-time document creation via Google Docs. Use this experience to highlight how anyone can make changes online, and to develop a critical understanding that what we see online may not always be entirely trustworthy.
  • Harness the power of technology by going beyond the basics. Most children will be able to conduct a simple online search, so highlight ways this can be improved and refined through Boolean search terms, and incorporate this into a lesson on critically assessing the value of information.

Final thoughts

Since the widespread adoption of the internet, the way we use technology has changed at an almost frightening pace. Therefore, the digital skills we all need to interact with technology must keep up if we are to truly harness the power and potential of these new advancements.

Ensuring that we are all digitally literate will enable us to take advantage of new digital platforms effectively and could potentially lead to future economic prosperity. Developing digital literacy will not be easy, but it will be vital to ensure the future workforce have the skills they need to gain employment and play their part within the fourth industrial revolution.



Follow us on Twitter to see which topics are interesting our research team.

Read some of our other blogs on digital skills:

Facial recognition systems: ready for prime time?

by Scott Faulds

Across the UK, it is estimated that there are 1.85 million CCTV cameras, approximately one camera for every 36 people.  From shopping centres to railway stations, CCTV cameras have become a normal part of modern life and modern policing, with research from the College of Policing indicating that CCTV modestly reduces overall crime. Currently, most of the cameras utilised within the CCTV system are passive; they act as a deterrent or provide evidence of an individual’s past location or of a crime committed.

However, advances in artificial intelligence have allowed for the development of facial recognition systems which could enable CCTV cameras to proactively identify suspects or active crime in real-time. Currently, the use of facial recognition systems in limited pilots has received a mixed reaction, with the Metropolitan Police arguing that it is their duty to use new technologies to keep people safe. But privacy campaigners argue that the technology poses a serious threat to civil liberties and are concerned that facial recognition systems contain gender and racial bias.

How does it work?

Facial recognition systems operate in a similar way to how humans recognise faces, through identifying familiar facial characteristics, but in a much larger-scale and data-driven way. Whilst there are a variety of different types of facial recognition system, the basic steps are as follows:

  • An image of a face is captured within a photograph, video or live footage. The face can be within a crowd and does not necessarily have to be directly facing a camera.
  • Facial recognition software biometrically scans the face and converts unique facial characteristics (distance between your eyes, distance from forehead to chin etc.) into a mathematical formula known as a facial signature.
  • The facial signature can then be compared to faces stored within a database (such as a police watchlist) or faces previously flagged by the system.
  • The system then determines whether it has identified a match; in most systems, the level of confidence required before the system flags a match can be altered.
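
As a toy illustration of the matching step, the sketch below compares an incoming "facial signature" – here just three invented measurements – against a small watch-list using a distance threshold. Real systems use learned embeddings with hundreds of dimensions, but the flag-or-don't-flag logic is broadly similar.

```python
# Toy illustration of comparing a facial signature against a watch-list.
# Signatures, names and the threshold are invented for the example.
import math

def distance(sig_a, sig_b):
    """Euclidean distance between two facial signatures."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

WATCHLIST = {
    "suspect_A": [0.42, 0.77, 0.31],
    "suspect_B": [0.58, 0.64, 0.49],
}

def check_face(signature, threshold=0.10):
    """Flag the closest watch-list entry if it falls within the confidence threshold."""
    best_name, best_dist = min(
        ((name, distance(signature, ref)) for name, ref in WATCHLIST.items()),
        key=lambda pair: pair[1],
    )
    return best_name if best_dist < threshold else None

print(check_face([0.43, 0.76, 0.33]))  # close to suspect_A -> flagged
print(check_face([0.90, 0.10, 0.90]))  # no match -> None
```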

Facial recognition and the police

Over the past twelve months, the Metropolitan Police and South Wales Police have both operated pilots of facial recognition systems, designed to identify individuals wanted for serious and violent offences. These pilots involved the placement of facial recognition cameras in central areas, such as Westfield Shopping Centre, where large crowds’ faces were scanned and compared to a police watch-list. If the system flagged a match, police officers would then ask the potential match to confirm their identity and, if the match was correct, they would be detained. Police forces have argued that the public broadly support the deployment of facial recognition and believe that the right balance has been found between keeping the public safe and protecting individual privacy.

The impact of the deployment of facial recognition by the police has been compared by some to the introduction of fingerprint identification. However, it is difficult to determine how successful these pilots have been, as there has been a discrepancy in the reporting of the accuracy of these facial recognition systems. According to the Metropolitan Police, 70% of wanted suspects would be identified walking past facial recognition cameras, whilst only one in 1,000 people would generate a false alert, an error rate of 0.1%. Conversely, independent analysis commissioned by the Metropolitan Police has found that only eight out of 42 matches were verified as correct, an error rate of 81%.

The massive discrepancy in error rates can be explained by the way in which the accuracy of a facial recognition system is assessed. The Metropolitan Police measure accuracy by comparing successful and unsuccessful matches with the total number of faces scanned by the facial recognition system. Independent researchers, on the other hand, assess the accuracy of the flags generated by the facial recognition system. It is therefore unclear how accurate facial recognition truly is. Nevertheless, the Metropolitan Police have now begun to use live facial recognition cameras operationally.
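
A quick worked example shows how the same deployment can produce both headline figures. The 42 flags and eight verified matches are the numbers reported above; the total number of faces scanned is not reported, so the 34,000 used below is purely an assumed figure chosen to reproduce the 0.1% statistic.

```python
# Worked example: the same counts yield very different error rates depending on the denominator.
flags = 42
correct = 8
false_alerts = flags - correct          # 34
faces_scanned = 34_000                  # assumption, not a reported figure

# Metropolitan Police framing: false alerts as a share of everyone scanned
error_vs_all_scanned = false_alerts / faces_scanned
print(f"{error_vs_all_scanned:.1%}")    # 0.1%

# Independent researchers' framing: false alerts as a share of flags generated
error_vs_flags = false_alerts / flags
print(f"{error_vs_flags:.0%}")          # 81%
```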

Privacy and bias

Civil liberties groups, such as Liberty and Big Brother Watch, have raised a variety of concerns regarding the police’s use of facial recognition. These groups argue that the deployment of facial recognition systems presents a clear threat to individual privacy and to privacy as a social norm. Although facial recognition systems used by the police are designed to flag those on watch-lists, every single person that comes into the range of a camera will automatically have their face biometrically scanned. In particular, privacy groups have raised concerns about the use of facial recognition systems during political protests, arguing that their use may constitute a threat to the right to freedom of expression and may even represent a breach of human rights law.

Additionally, concerns have been raised regarding the racial and gender bias found to be prevalent in facial recognition systems across the world. A recent evaluative study conducted by the US Government’s National Institute of Standards and Technology on 189 facial recognition algorithms found that most algorithms exhibit “demographic differentials”. This means that a facial recognition system’s ability to match two images of the same person varies depending on demographic group. The study found that facial recognition systems were less effective at identifying BAME and female faces, meaning that these groups are statistically more likely to be falsely flagged and potentially questioned by the police.

Final thoughts

From DNA to fingerprint identification, the police are constantly looking for new and innovative ways to help keep the public safe. In theory, the use of facial recognition is no different: the police argue that the ability to quickly identify a person of interest will make the public safer. However, unlike previous advancements, the effectiveness of facial recognition is largely unproven.

Civil liberties groups are increasingly concerned that facial recognition systems may infringe on the right to privacy and worry that their use will turn the public into walking biometric ID cards. Furthermore, research has indicated that the vast majority of facial recognition systems feature racial and gender bias, which could lead to women and BAME individuals experiencing repeated contact with the police due to false matches.

In summary, facial recognition systems provide the police with a new tool to help keep the public safe. However, in order to be effective and gain the trust of the public, it will be vital for the police to set out the safeguards put in place to prevent privacy violations and the steps taken to ensure that the systems do not feature racial and gender bias.  


Follow us on Twitter to see which topics are interesting our Research Officers this week.

If you enjoyed this article you may also like to read:

Icons made by monkik from www.flaticon.com

Five blog posts that told the story of 2019

As the old year makes way for the new, it’s time to reflect on some of the topics we’ve been covering on The Knowledge Exchange blog over the past twelve months. We’ve published over 70 blog posts in 2019, covering everything from smart canals and perinatal mental health to digital prescribing and citizens’ assemblies. We can’t revisit them all, but here’s a quick look back at some of the stories that shaped our year.

Nick Youngson CC BY-SA 3.0 Alpha Stock Images

Tomorrow’s world today

Artificial Intelligence was once confined to the realms of science fiction and Hollywood movies, but it’s already beginning to have a very real impact on our personal and working lives. In February, we looked at the pioneering local authorities that are dipping a toe into the world of AI:

“In Hackney, the local council has been using AI to identify families that might benefit from additional support. The ‘Early Help Predictive System’ analyses data related to (among others) debt, domestic violence, anti-social behaviour, and school attendance, to build a profile of need for families. By taking this approach, the council believes they can intervene early and prevent the need for high cost support services.”

However, the post went on to highlight concerns about the future impact of AI on employment:

“PwC’s 2018 UK Economic Outlook suggests that 18% of public administration jobs could be lost over the next two decades. Although it’s likely many jobs will be automated, no one really knows how the job market will respond to greater AI, and whether the creation of new jobs will outnumber those lost.”

Tackling violent crime

One of the most worrying trends in recent years has been the rise in violent crime. Figures released in January found overall violent crime in England and Wales had risen by 19% on the previous year.

As our blog reported in March, police forces around the country, along with health services, local government, education and the private sector have been paying close attention to the experience of Glasgow in tackling violent crime.

Glasgow’s Violence Reduction Unit (VRU) was launched in 2005, and from the start it set out to treat knife crime not just as a policing matter, but as a public health issue. In its first ten years, the VRU helped to halve the number of homicides in the city, with further progress in subsequent years.

In March, our blog explained that the VRU takes a holistic approach to its work:

“…staff from the VRU regularly go into schools and are in touch with youth organisations. They also provide key liaison individuals called “navigators” and provide additional training to people in the community, such as dentists, vets and hairdressers to help them spot and report signs of abuse or violence.”

Protecting the blue planet

Environmental issues have always featured strongly in our blog, and in a year when people in larger numbers than ever have taken to the streets to demand greater action on climate change, we’ve reported on topics such as low emission zones, electric vehicles and deposit return schemes.

In August, we focused on the blue economy. The world’s oceans and seas are hugely important to the life of the planet, not least because they are home to an astonishing variety of biodiversity. In addition, they absorb large amounts of carbon dioxide emissions. But they are also a source of food, jobs and water – an estimated 3 billion people around the world rely on the seas and oceans for their livelihood.

Pollution is having a devastating impact on the world’s oceans, and, as our blog reported, governments are finally waking up to the need for action:

The first ever global conference on the sustainable blue economy was held last year. It concluded with hundreds of pledges to advance a sustainable blue economy, including 62 commitments related to: marine protection; plastics and waste management; maritime safety and security; fisheries development; financing; infrastructure; biodiversity and climate change; technical assistance and capacity building; private sector support; and partnerships. 

Sir Harry Burns
Image: Jason Kimmings

A sense of place

The ties that bind environmental factors, health and wellbeing are becoming increasingly clear. This was underlined at an international conference in June on the importance of place-based approaches to improving health and reducing inequalities.

One of the speakers was Sir Harry Burns, Director of Global Public Health at the University of Strathclyde. His research supports the idea that poverty is not the result of bad choices, but rather the absence of a sense of coherence and purpose that people need to make good choices:

“People who have a sense of purpose, control and self-esteem are more positive and secure about the places they live in, and a greater ability to make the right choices. Ask people to take control of their lives, build their trust, and people can make choices that support their health. We must create places that do that”.

Celebrating diversity

While it sometimes seems as if our society has made great strides in stamping out prejudice and supporting minority groups, at other times the stark reality of discrimination can shine a light on how far we still have to go.

In June, we marked Gypsy, Roma and Traveller (GRT) History Month with two blog posts that aimed to raise awareness of the many issues faced by GRT communities in the UK today:

“Research by Travellers Movement has found that four out of five (77%) of Gypsies, Roma and Travellers have experienced hate speech or a hate crime – ranging from regularly being subject to racist abuse in public to physical assaults. There is also evidence of discrimination against GRT individuals by the media, police, teachers, employers and other public services.”

But our blog also highlighted work being done to address these issues and to spread the word about GRT communities’ rich cultural heritage:

“Today, organisations and individuals such as The Traveller Movement, Friends, Families and Travellers, and Scottish Traveller activist Davie Donaldson strive to promote awareness of and equality for the GRT community. The recent Tobar an Keir festival held by the Elphinstone Institute at Aberdeen University sought to illustrate traditional Traveller skills such as peg-making.”

Back to the future

Since first launching in 2014, The Knowledge Exchange blog has published more than 700 posts, covering topics as varied as health and planning, education and digital, the arts, disabilities, work and transport.

The key issues of our times – climate change, Brexit and the economy – haven’t been neglected by our blog, but we’ve looked at them in the context of specific topics such as air pollution, higher education and diversity and inclusion in the workplace.

As we head into a new year, the aims of The Knowledge Exchange blog remain: to raise awareness of issues, problems, solutions and research in public policy and practice.

We wish all our readers a very Happy Christmas, and a peaceful, prosperous and healthy 2020.

‘Digital prescribing’ – could tech provide the solution to loneliness in older people?

Emergency call and help for pensioners and the sick

The number of over-50s experiencing loneliness could reach two million by 2026. This compares to around 1.4 million in 2016/7 – a 49% increase in 10 years.

It has also been estimated that around 1.5 million people aged 50 and over are ‘chronically lonely.’

With an ageing population and increasing life expectancy, it seems likely that loneliness among older people is set to continue, unless something significant is done. According to Age UK, tackling loneliness requires more than social activities. A new report from Vodafone suggests technology could be the answer.

Impact

The impact of loneliness in older people can be immense, not only for the older people themselves but for those around them. It can also put strain on the NHS, employers and organisations providing support to people who are lonely; and have a negative impact on growth and living standards.

Research has suggested that those experiencing social isolation and loneliness are at increased risk of developing health conditions such as dementia and depression, as well as increased risk of mortality. The damaging health effect of loneliness has been shown to be comparable to smoking 15 cigarettes a day. Older people who are lonely are therefore more likely to use health services than those who are never lonely.

The economic impact is also significant. It has been estimated that increases in service usage create a cost to the public sector of an average £12,000 per person over the medium term (15 years). Vodafone’s report suggests that loneliness has a £1 billion a year impact on public services. It has also been found to cost employers £2.5 billion per year.

How tech can ease the burden

According to Vodafone, “new technologies are a key part of the solution” alongside more traditional public and community services. Two key routes through which technology can be used to reduce loneliness are highlighted:

  • by supporting older people to remain independent in their home and community; and
  • maintaining and building networks and contacts.

From wearable devices and touchscreens to personal robots that act as the eyes, ears and voice of people unable to be present physically, these are all highlighted as viable and positive uses of tech to ease the burden of loneliness. And there are already a number of examples of innovative use of technology that can benefit older people.


No Isolation AV1 robot. Image by Mats Hartvig Abrahamsen, via CC BY-SA 4.0

Good practice examples

One such example is Vodafone’s smart wearable wristband, the V-SOS Band, which supports independent living while also increasing the wearer’s safety. It can directly alert family members via their phone if the wearer needs help. It also uses fall detection technology so that families can be alerted automatically if the wearer falls either in the home or when they are out.

Kraydel is another example. Its smart TV-top hub links elderly people to their carers or family members, through their TV screens, helping people be more independent and remain in their own homes for longer as well as helping them be more socially connected. It provides for user-friendly video calling via the TV and can help people return home from hospital earlier. Via connection to the cloud, the device interprets the data it receives to build up a picture of the user’s daily activities, health and wellbeing. It issues medicine and diary reminders, and alerts caregivers if it sees something amiss, or identifies potential risk.

Although aimed at children, No Isolation’s AV1 – a smart robot designed to reduce the risks of children and young adults with long-term illness becoming socially isolated – demonstrates the positive impact innovative technology can have on social isolation and loneliness. The robot avatar, with its 360 degree camera, acts as the child’s eyes, ears and voice in the classroom or at other events, keeping children closely involved with school and in touch with their friends.

Of course, loneliness is particularly prevalent among people who don’t use smart technology such as smartphones and tablets, one of the reasons cited by Kraydel for using the TV – probably the most familiar and widely used screen globally. This issue also led No Isolation to develop KOMP, a communication device for seniors that requires no prior digital skills. It enables users to receive photos, messages and video calls from their children and grandchildren, operated with a single button.

Another new project recently launched in Sweden – considered one of the world’s loneliest countries – uses a unique conversational artificial intelligence which enables older people to capture life stories for future generations while providing companionship. Memory Lane works with Google Voice Assistant and is able to hold meaningful conversations in as human a way as possible. A pilot test showed that the software “instantly sparked intimate conversations” and led to stories that hadn’t been told before.

Final thoughts

With a significant number of older people lacking confidence in their ability to use technology for essential online activities, support for digital skills is obviously still important. In response to this issue, Vodafone has launched free masterclasses across the UK, as part of a programme called TechConnect.

Many of the innovative examples above bypass the traditional barriers to realising the potential of technology in reducing loneliness, because most:

  • don’t rely on older people engaging directly with the technology; and
  • are based on mobile technology that can be constantly connected, whether inside or outside the home.

However, there is still the issue of awareness of such technologies and their accessibility to older people. The Vodafone report suggests that access could be improved through social and digital prescribing and revitalising support for independent living, and calls for a challenge fund to support innovation. It is suggested that these innovative ideas are just the start and that combined action is needed from across all levels of government, business and community groups, amongst others.

Perhaps if such action is taken to address the existing barriers, we will see a reversal of the loneliness trend over the next 10 years.


Follow us on Twitter to see what developments in public and social policy are interesting our research team.

Smart cities aim to make urban life more efficient – but for citizens’ sake they need to slow down

Sometimes you want to take it slow. Fabrizio Verrecchia/Unsplash, FAL

Guest post by Lakshmi Priya Rajendran, Anglia Ruskin University

All over the world, governments, institutions and businesses are combining technologies for gathering data, enhancing communications and sharing information, with urban infrastructure, to create smart cities. One of the main goals of these efforts is to make city living more efficient and productive – in other words, to speed things up.

Yet for citizens, this growing addiction to speed can be confounding. Unlike businesses or services, citizens don’t always need to be fast to be productive. Several research initiatives show that cities have to be “liveable” to foster well-being and productivity. So, quality of life in smart cities should not be associated with speed and efficiency alone.

The pace of city life is determined by many factors, such as people’s emotions or memories, the built environment, the speed of movement and by the technologies that connect people to – or detach them from – any given place. As cities around the world become increasingly “smart”, I argue that – amid the optimised encounters and experiences – there also need to be slow moments, when people can mindfully engage with and enjoy the city.

Cities provide an environment for people to move, encounter, communicate and explore spaces. Research shows how these experiences can differ, depending on the pace of the activity and the urban environment: whether fast or slow, restless or calm, spontaneous or considered.

“Slow” approaches have been introduced as an antidote to many unhealthy or superficial aspects of modern life. For example, the slow reading movement encourages readers to take time to concentrate, contemplate and immerse themselves in what they’re reading – rather than skim reading and scrolling rapidly through short texts.

Similarly, the international slow food movement started in Italy as a protest against the opening of a McDonald’s restaurant on the Spanish Steps in Rome, back in 1986. Then, in 1999, came the “cittaslow movement” (translated as “slow city”) – inspired by the slow food movement – which emphasises the importance of maintaining local character while developing an economy which can sustain communities into the future.

Orvieto, Italy – home of the cittaslow movement. Shutterstock. 
Slow cities arise from grassroots efforts to improve quality of life for citizens, by reducing pollution, traffic and crowds and promoting better social interaction within communities. They must follow a detailed set of policy guidelines, which focus on providing green space, accessible infrastructure and internet connectivity, promoting renewable energy and sustainable transport, and being welcoming and friendly to all. Slow cities can create opportunities for healthier behavioural patterns – including pausing or slowing down – which allow for more meaningful engagement in cities.

These guidelines present a clear road map for city governments, but there are also ways that local people can promote a slow city ethos in fast-paced cities throughout the world. For example, in London, artists and activists have organised slow walks to encourage the general public to meaningfully engage with urban spaces, and show them how diverse their experiences of the city can be, depending on the speed of movement.

Slow and smart

Trying to put people’s concerns at the heart of smart city policies has always been challenging, due to the lack of creative grassroots approaches that enable citizens to participate and engage with planning. And while technology has been able to give citizens instant access to a wide range of data about a place, it is rarely used to improve their actual experience of that place.

Getting smart cities to slow down could give citizens the means to explore the urban environment at a range of different paces, each offering a distinctive experience. To do this, architects, artists and urban planners need to look beyond the ways that technology can give instant access to information, services and entertainment – whether that’s video game lounges, or recharging and navigation pods in airports and stations.

Instead, they must recognise that technology can create platforms for citizens to immerse themselves and engage meaningfully in different experiences within the urban environment. For example, technology-based installations or projections can tell stories about people and places from other times, which enrich people’s experience of the city. Artificial Intelligence and machine learning can offer new ways to understand cities, and the way people function within them, which could help give human behaviour and experience a significant place in smart city planning.

Slow and smart cities could take the best of both approaches, helping citizens to connect with the history, present and future of a place, emphasising local character and building a sense of community, while also making use of the latest technology to give people greater choice about whether they want to speed up or slow down.

This would not only enhance efficiency and productivity, but also ensure that technology actively helps to improve people’s quality of life and make cities better places to live. It may sound idealistic, but with the range of advanced technology already being developed, ensuring cities are slow as well as smart could help people live better, more meaningful lives long into the future.


Guest post by Lakshmi Priya Rajendran, Senior Research Fellow in Future Cities, Anglia Ruskin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why not read some of our other articles on smart cities: