Could recent backlash crash the not-so-smart city?

In May 2020, Google-affiliated Sidewalk Labs abruptly cancelled its smart city vision for Toronto’s waterfront, citing that “unprecedented economic uncertainty” created by the pandemic had made the project unachievable.

Named ‘Quayside’, the venture proposed a 12-acre development of sleek apartments and neighbourhood amenities that heavily incorporated data and technology into urban design and residents’ daily living.

With features including an underground delivery system and heated, ice-melting roads, the futuristic plan aimed to turn Toronto into the world’s first truly ‘smart city’.

Yet, the Quayside development faced fierce criticism before it could even get underway.

At the heart of the development was a plan to harvest an extensive flow of data, amassed by tracking millions of residents’ daily movements through sensor-laden streets and buildings.

However, critics saw a darker side to Sidewalk Labs, fearing that residents’ data would be stored and used by Google. Such fears only intensified after a series of publicised data breaches at Big Tech companies.

US businessman Roger McNamee described the project as “the most highly evolved version to date of surveillance capitalism”, warning that Google would use “algorithms to nudge human behaviour” for corporate interests.

Despite Sidewalk’s assurances that the data collected wouldn’t be shared with third parties, Toronto city council members began to voice official concerns. A National Research Council report stated that Canada was in danger of becoming a “data cow” for foreign tech companies.

After years of controversial public debate that played out in courtrooms and street protests, the proposals were eventually abandoned altogether.

An industry slowing down

The story of Quayside’s defeat perhaps has greater implications for the future of smart city culture. Its demise has coincided with numerous high-profile examples of downscaling in grand smart city projects across the world, such as Songdo in South Korea and the ill-famed Masdar City in Abu Dhabi.

In fact, the smart city sector as a whole appears to be in decline, as the regions with the most smart-city deployments have seen large drop-offs in new developments. For instance, the number of new projects in Europe increased year-on-year to a peak of 43 in 2016, yet fell to just 17 in 2020.

Likewise, data suggests that the major suppliers to government smart city projects have seen their influence on the sector weaken considerably. Since 2016, companies such as Cisco Systems, Vodafone and Telensa have greatly reduced the number of new developments they are undertaking, and there are numerous examples of backtracking throughout the industry.

In late 2020, Cisco Systems announced that the company was scrapping its flagship smart-city software altogether. Such instances suggest at least a slowdown in new ventures, or perhaps even a wholesale shift in company priorities.

So, why is the smart city bandwagon beginning to falter?

Not ‘smart’ enough post-pandemic?

Whilst the privacy backlash that finished off Quayside exemplifies concerns that existed before Covid-19, the pandemic may have further compounded the barriers faced by the smart city.

The hard-hitting financial implications and uncertainties created by the pandemic have presumably put ambitious smart city projects on the back burner, as city governments re-align their priorities towards economic recovery.

“They’ve [smart city technology providers] all seen the challenges and the opportunities in this pandemic moment,” says Nigel Jacob, co-chair of the Mayor’s Office of New Urban Mechanics, a civic-innovation research lab in Boston. “I think they are still struggling and looking at their product portfolio and looking to see what value they can add. I do think the field has shifted.”

Jacob suggests that the pre-Covid landscape of smart city promotion has ultimately shifted, a viewpoint that is echoed throughout the industry. Many believe that the pandemic has forced city governments and citizens to re-evaluate what they want urban areas to achieve.

David Bicknell, principal thematic analyst for GlobalData, argues: “Smart cities had their time. They are no longer about glossy, sensor-driven metropolises.” He adds, “The impact of the pandemic and climate change now means smart cities cannot just be ‘smart’ – they must be resilient and sustainable, too.”

It could be argued that citizens are now more focused on achieving tangible outcomes in their communities on the key issues of climate change, health and social equity.

Whilst the potential for technology to drive change in these areas is undoubted, the idea that a smart city business model should simply be about the city getting smarter is difficult to uphold in the landscape of post-pandemic finances.

With the exception of climate change, the traditional smart city does not look to tackle the big issues that the pandemic has brought to the fore, Jacob argues.

Privacy concerns here to stay

The pandemic also introduced a new array of concerns surrounding data collection. Contact tracing apps, biometric vaccine passports and temperature scanning as a condition to entering premises have added fuel to the fire of privacy issues that people are now encountering.

Added to this, some academics worry that whilst these technologies have been accepted into day-to-day life under unprecedented circumstances, their acceptance leaves open the possibility of such platforms being manipulated for more sinister purposes in the future.

And, with the numerous high-profile legal cases surrounding Facebook, Amazon and Google’s privacy policies now regular features in the media, the public is certainly more attuned to privacy issues than it was at the time of the Quayside story.

Final Thoughts

Despite how strongly opposed many residents were to the Toronto Quayside development, it is clear that the integration of sensors, scanners and cameras into city living is here to stay. And there are undoubted benefits of smart technologies that are already evident in cities throughout the world – from intelligent LED street lighting to data-driven traffic control systems.

However, for the potential of smart technologies to be truly realised and accepted by the public, the smart city must be re-aligned to fit the privacy-conscious, post-pandemic world.


Further reading: more about smart cities on The Knowledge Exchange Blog

Cross-border handshakes: what’s next for digital contact tracing?

As we enter a new year, and a new phase of the Covid-19 pandemic, we are reminded of the need to follow public health advice to stop the spread of the virus. The emergence of new variants of Covid-19, which appear to be more transmissible, has resulted in tougher restrictions across the world. Although the emergence of new variants of Covid-19 can seem frightening, we are not powerless in preventing the spread of the virus; face coverings, social distancing, regular handwashing and self-isolating remain effective.

Additionally, the development and subsequent roll-out of numerous vaccines should provide us all with hope that there is light at the end of the tunnel. However, although vaccines appear to protect people from becoming seriously ill with the virus, there is still uncertainty regarding the impact vaccines will have on viral transmission of Covid-19.

Therefore, the need for those with symptoms to self-isolate, get tested and undergo contact tracing when a positive case is detected is likely to remain. This will become even more important in the months ahead, as we see the gradual re-opening of hospitality, leisure and tourism sectors.

Effectiveness of contact tracing

Contact tracing is a tried-and-tested public health intervention intended to identify individuals who may have been in contact with an infected person and advise them to take action that will disrupt chains of transmission. Prior to Covid-19, contact tracing was often used to prevent the spread of sexually transmitted infections, and has been heralded as vital to the eradication of smallpox in the UK.

According to modelling published in The Lancet Infectious Diseases, a combination of self-isolation, effective contact tracing and social distancing measures may be the most effective and efficient way to control the spread of Covid-19.

However, for contact tracing to be at its most effective, the modelling estimates that for every 1,000 new symptomatic cases, 15,000 to 41,000 contacts would have to be asked to self-isolate. Clearly, the logistical burden of operating a manual contact tracing system is high. As a result, governments have chosen to augment existing systems by deploying digital contact tracing apps, which are predominantly built using software developed by Apple and Google.

Digital contact tracing

As we go about our day-to-day lives, especially as restrictions are eased, it may not be possible to name everyone we have encountered over the previous 14 days if we later contract Covid-19. Digital contact tracing provides a solution to this issue by harnessing the Bluetooth technology within our phones to help identify and remember potential close contacts. Research by the University of Glasgow has found that contact tracing apps can contribute substantially to reducing infection rates when accompanied by a sufficient testing capability.

Most countries have opted to utilise a system developed by Apple and Google, known as Exposure Notifications, as the basis for digital contact tracing. Public health authorities have the option to either provide Apple and Google with the criteria which defines when an alert should be generated or develop their own app, such as the Scottish Government’s Protect Scotland.

Exposure notification system

In order to protect privacy, the exposure notification system can only be activated by a user after they have agreed to the terms; the system cannot be unilaterally activated by public health authorities or Apple and Google. 

Once activated, the system utilises Bluetooth technology to swap anonymised IDs with other users’ devices when they come into close contact. This has been described as an anonymous handshake. Public health authorities set what is considered as a close contact (usually contact at less than a 2-metre distance for over 15 minutes), and the app calculates proximity measurements over a 24-hour period.

Anonymised IDs are not associated with a user’s identity, change every 10-20 minutes, and are securely stored locally on user devices for 14 days (the incubation period of Covid-19) before being deleted.

If a user tests positive for Covid-19, the public health authority will provide them with a code that confirms their positive diagnosis. This then gives the user the option to upload their collected anonymous IDs to a secure public health authority server. At least once a day, each user’s phone contacts this server to check whether any of the anonymised IDs collected in the previous 14 days match a positive case. If there is a match, and the proximity criteria have been met, the user may receive a notification informing them of the need to self-isolate.
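The flow described above can be sketched in code. This is a deliberately simplified illustration, not the real Exposure Notifications protocol: actual systems derive rolling proximity IDs cryptographically from device keys, whereas here random tokens stand in for them, and the `Device` class and its methods are hypothetical names used only for this sketch.

```python
import secrets
from datetime import datetime, timedelta

ROTATION_MINUTES = 15   # IDs rotate roughly every 10-20 minutes in real systems
RETENTION_DAYS = 14     # local storage window, matching Covid-19's incubation period


def new_anonymous_id() -> str:
    """Stand-in for a rolling proximity ID (a random, unlinkable token)."""
    return secrets.token_hex(16)


class Device:
    """Toy model of one phone participating in the anonymous handshake."""

    def __init__(self) -> None:
        # IDs observed from nearby phones, with the time of contact
        self.observed: dict[str, datetime] = {}

    def record_contact(self, other_id: str, when: datetime) -> None:
        """Store an ID swapped with another device during a close contact."""
        self.observed[other_id] = when

    def purge_old(self, now: datetime) -> None:
        """Delete IDs older than the 14-day retention period."""
        cutoff = now - timedelta(days=RETENTION_DAYS)
        self.observed = {i: t for i, t in self.observed.items() if t >= cutoff}

    def check_exposure(self, positive_ids: set[str]) -> bool:
        """Daily check of locally stored IDs against IDs uploaded to the
        health authority server by confirmed-positive users."""
        return any(i in positive_ids for i in self.observed)
```

In use, two phones in close contact would each call `record_contact` with the other's current ID; if one user later tests positive and uploads their IDs, the other's daily `check_exposure` call returns `True`. Matching happens entirely on the device, which is why no central party learns who met whom.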

Analysis conducted by the National Institute for Health Research highlights that the use of contact tracing apps, in combination with manual contact tracing, could lead to a reduction in the number of secondary Covid-19 infections. Additionally, the analysis revealed that contact tracing apps identified more possible close contacts and reduced the amount of time it took to complete contact tracing. The analysis concluded that the benefits of digital contact tracing include the ability to trace contacts who may not be known to the infected individual and the overall reliability and security of digitally stored data, rather than an individual’s memory or diary.

Therefore, it could be said that digital contact tracing apps will be most effective when restrictions ease and we are more likely to be in settings where we come into close contact with people we may not know, for example on holiday or in a restaurant.

Cross-border handshakes

Covid-19 naturally does not respect any form of border, and as restrictions on domestic and international travel are relaxed, opportunities will arise for Covid-19 to spread. In order to facilitate the reopening of the tourism sector, there have been calls for countries which have utilised the Exposure Notification system to enable these systems to interact.

Examples of interoperability already exist within the UK: an agreement between Scotland, England, Wales and Northern Ireland (plus Jersey, Guernsey and Gibraltar) enables users to continue to receive exposure notifications when they visit an area they do not live in, without needing to download the local public health authority’s app.

EU Exposure Notification system interoperability, European Commission, 2020

Additionally, the European Union has developed interoperability of the Exposure Notification system between member states, with a commitment to link 18 national contact tracing apps, establishing the world’s largest bloc of digital contact tracing. The EU views the deployment of linked apps as vital to re-establishing safe free movement of people between member states, for work as well as tourism.

Over the next few months, it is likely that links will be created across jurisdictions. For example, the Scottish Government has committed to investigating how interoperability can be achieved between the Scottish and EU systems. The interoperability of Northern Ireland and Ireland’s contact tracing app highlights that on a technical level there appears to be no barrier for this form of cross-jurisdiction interaction.  

Therefore, as restrictions ease, the interoperability of digital contact tracing apps may become a vital way in which to ensure safe travel, as we learn to live with the ongoing threat of Covid-19.

Final thoughts

Covid-19 has proven itself to be a persistent threat to our everyday lives. However, the deployment of effective vaccines provides us with hope that the threat will soon be minimised. Until then, the need to utilise contact tracing is likely to remain.

As the roll-out of mass-vaccination programmes accelerates, and restrictions are relaxed, we are likely to be in more situations where we will be in contact with more people, not all of whom we may necessarily know. This will be especially true as domestic and international tourism begins to re-open. In these scenarios, the Exposure Notification system, and interoperability between public health authority apps, will become increasingly vital to the operation of an effective contact tracing system.

In short, digital contact tracing may prove to be key to the safe re-opening of the tourism sector and enable users to easily and securely be contact traced across borders.


Follow us on Twitter to see which topics are interesting our research team.

Further reading: articles on COVID-19 and digital from The Knowledge Exchange blog

Knowledge from a distance: recent webinars on public and social policy

During the national lockdown, it’s been impossible for most of us to attend conferences and seminars. But many organisations have been harnessing the power of technology to help people share their knowledge, ideas and experience in virtual seminars.

In the past few weeks, the research officers at The Knowledge Exchange have joined some of these webinars, and in today’s blog post we’d like to share with you some of the public and social policy issues that have been highlighted in these online events.

The liveable city

Organised by the Danish Embassy in the UK, this webinar brought together a range of speakers from Denmark and the UK to consider how our cities may change post COVID-19, including questions around green space, high street recovery, active travel and density and types of residential living accommodation in our towns and cities.

Speakers came from two London boroughs, architectural design and urban planning backgrounds and gave examples of experiences in Newham, Ealing and Copenhagen as well as other more general examples from across the UK and Denmark. The seminar’s website also includes links to presentations on previous Liveable City events in Manchester, Edinburgh, Bristol and Glasgow.


What next for public health?

“Healthcare just had its 2008 banking crisis… COVID-19 has generated a real seismic shift within the sector and I don’t think we will ever go back”

This webinar brought together commentators and thought leaders from across the digital health and tech sectors to think about how public health may be transformed by our experiences of the COVID-19 pandemic and the significant shift to digital and online platforms to deliver care.

The speakers discussed data, privacy and trust and the need to recognise different levels of engagement with digital platforms to ensure that specific groups like older people don’t feel unable to access services. They also discussed the importance of not being driven by data, but using data to help us to make better decisions. The webinar was organised by BIMA, a community of businesses, charities and academia across the UK.


Green cities

This project, organised by the Town and Country Planning Association (TCPA), included three webinars, each looking at different elements of green infrastructure within cities: designing and planning, assessing the quality of different types of green infrastructure, and highlighting the positive impacts of incorporating more good-quality green spaces, for mental and physical health as well as for environmental purposes.


Rough sleeping and homelessness during and after the coronavirus

Organised by the Centre for London, this webinar brought together speakers from across the homelessness sector within London, including St Mungos, the Greater London Authority (GLA) and Croydon Council to explore how the COVID-19 pandemic was impacting people who are homeless or sleeping rough in the city.

Each speaker brought insights from their own experiences of supporting homeless people in the capital during the COVID-19 pandemic so far. They highlighted some of the challenges, as well as some of the more positive steps forward, particularly in relation to co-operation and partnership working across different levels of government and with other sectors such as health.

They also commended everyone involved for the speed at which they acted to support homeless people, particularly those who were vulnerable or at risk. However, concerns were also raised around future planning and the importance of not regressing back into old ways of working once the pandemic response tails off.


Poverty, health and Covid-19: emerging lessons in Scotland

This webinar was hosted by the Poverty Alliance as part of a wider series that they are hosting.  It looked at how to ‘build back better’ following the pandemic, with a particular focus upon addressing the long-standing inequalities that exist throughout society.

The event included presentations from Dr Gerry McCartney, Head of the Public Health Observatory at Public Health Scotland, Dr Anne Mullin, Chair of the Deep End GPs, and Professor Linda Bauld, Professor of Public Health at University of Edinburgh.

A key message throughout was that while the immediate health impacts of the pandemic have been huge, there is an urgent need to acknowledge and address the “long-term challenge” – the impact on health caused by the economic and social inequalities associated with the pandemic.

It is estimated that over 10 years, the impact of inequalities will be six times greater than that of an unmitigated pandemic. Therefore, ‘building back better’ is essential in order to ensure long-term population health.


Returning to work: addressing unemployment after Covid-19

This webinar was also hosted by the Poverty Alliance as part of their wider webinar series on the pandemic.

The focus here was how to address the inevitable rise in unemployment following the pandemic – the anticipated increase in jobless numbers is currently estimated to be over three million.

The event included presentations from Kathleen Henehan, Research and Policy Analyst at Resolution Foundation, Anna Ritchie Allan, Executive Director at Close the Gap, and Tony Wilson, Director of the Institute for Employment Studies.

The webinar highlighted the unprecedented scale of the problem – noting that more than half of the working population are currently not working due to the pandemic, being either unemployed, furloughed or in receipt of self-employment support.

A key theme of the presentation was that certain groups are likely to be disproportionately affected by unemployment as the government’s support schemes draw to a close later this year.  This includes women – particularly those from BAME groups, the lower paid and migrants – and young people.  So it’s essential that the support provided by the government in the form of skills, training, job creation schemes etc addresses this, and is both gender-sensitive and intersectional.


Supporting the return to educational settings of autistic children and young people

The aim of this webinar, provided by the National Autism Implementation Team (NAIT), was to offer a useful overview of how to support autistic children and young people, and those with additional support needs, back into educational settings following the pandemic.

Currently around 25% of learners in mainstream schools have additional support needs, and it is generally accepted that good autism practice is beneficial for all children.

The webinar set out eight key messages for supporting a successful return, which included making anticipatory adjustments rather than ‘waiting and seeing’, using visual supports, providing predictability, planning for movement breaks and provision of a ‘safe space’ for each child.  The importance of listening to parents was also emphasised.



Ellisland Farm, Dumfries. “P1050381.JPG” by ejbluefolds is licensed under CC BY-NC 2.0

Burns at Ellisland

Our Research Officer, Donna Gardiner, has also been following some cultural webinars, including one that focused on the links between Scotland’s national poet and the Ellisland Farm site. The webinar was led by Professor Gerard Carruthers, Francis Hutcheson Chair of Scottish Literature at the University of Glasgow and co-director of the Centre for Robert Burns Studies.

Robert Burns lived at Ellisland Farm in Dumfriesshire between May 1788 and November 1791, and it was there that he produced a significant proportion of his work – 23% of his letters and 28% of his songs and poems, including the famous Tam O’Shanter and Auld Lang Syne.

The presentation looked at how Robert Burns was influenced by the farm itself and its location on the banks of the River Nith.  It also touched on his involvement with local politics and friends in the area, which too influenced his work.

It was suggested that the Ellisland farm site could be considered in many ways to be the birthplace of wider European Romanticism. The webinar also included contributions from Joan McAlpine MSP, who is chair of the newly formed Robert Burns Ellisland Trust. She discussed how to help promote and conserve this historic site, particularly given the impact of the coronavirus on tourism.



Facial recognition systems: ready for prime time?

by Scott Faulds

Across the UK, it is estimated that there are 1.85 million CCTV cameras, approximately one camera for every 36 people.  From shopping centres to railway stations, CCTV cameras have become a normal part of modern life and modern policing, with research from the College of Policing indicating that CCTV modestly reduces overall crime. Currently, most of the cameras utilised within the CCTV system are passive; they act as a deterrent or provide evidence of an individual’s past location or of a crime committed.

However, advances in artificial intelligence have allowed for the development of facial recognition systems which could enable CCTV cameras to proactively identify suspects or active crime in real-time. So far, the use of facial recognition systems in limited pilots has received a mixed reaction, with the Metropolitan Police arguing that it is their duty to use new technologies to keep people safe. But privacy campaigners argue that the technology poses a serious threat to civil liberties and are concerned that facial recognition systems contain gender and racial bias.

How does it work?

Facial recognition systems operate in a similar way to how humans recognise faces, by identifying familiar facial characteristics, but on a much larger and more data-driven scale. Whilst there are a variety of different types of facial recognition system, the basic steps are as follows:

An image of a face is captured either within a photograph, video or live footage. The face can be within a crowd and does not necessarily have to be directly facing a camera.

Facial recognition software biometrically scans the face and converts unique facial characteristics (distance between your eyes, distance from forehead to chin etc) into a mathematical formula known as a facial signature.

The facial signature can then be compared to faces stored within a database (such as a police watchlist) or faces previously flagged by the system.

The system then determines if it believes it has identified a match; in most systems the level of confidence required before the system flags a match can be altered.
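The matching and thresholding steps above can be sketched in code. This is an illustrative simplification only: real systems derive facial signatures from deep neural network embeddings, whereas here a signature is just a short list of numbers, and the watchlist entries and the `find_match` helper are hypothetical names invented for this sketch.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Score how alike two facial signatures are (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def find_match(signature: list[float],
               watchlist: dict[str, list[float]],
               threshold: float = 0.95):
    """Compare a scanned signature against every watchlist entry and report
    the best candidate only if it clears the confidence threshold - the
    tunable setting described in step 4 above."""
    best_name, best_score = None, 0.0
    for name, stored in watchlist.items():
        score = cosine_similarity(signature, stored)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:
        return best_name, best_score   # flag a match for human review
    return None, best_score            # below threshold: no alert raised
```

Lowering `threshold` makes the system flag more potential matches (and more false alerts); raising it does the opposite, which is exactly the trade-off at the centre of the accuracy debate discussed below.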

Facial recognition and the police

Over the past twelve months, the Metropolitan Police and South Wales Police have both operated pilots of facial recognition systems, designed to identify individuals wanted for serious and violent offences. These pilots involved the placement of facial recognition cameras in busy central areas, such as Westfield Shopping Centre, where the faces of large crowds were scanned and compared to a police watch-list. If the system flagged a match, police officers would ask the potential match to confirm their identity and, if the match was correct, detain them. Police forces have argued that the public broadly support the deployment of facial recognition and believe that the right balance has been found between keeping the public safe and protecting individual privacy.

The impact of the deployment of facial recognition by the police has been compared by some to the introduction of fingerprint identification. However, it is difficult to determine how successful these pilots have been, as there has been a discrepancy in the reporting of the accuracy of these facial recognition systems. According to the Metropolitan Police, 70% of wanted suspects would be identified walking past facial recognition cameras, whilst only one in 1,000 people would generate a false alert, an error rate of 0.1%. Conversely, independent analysis commissioned by the Metropolitan Police found that only eight out of 42 matches were verified as correct, an error rate of 81%.

The massive discrepancy in error rates can be explained by the way in which the accuracy of a facial recognition system is assessed. The Metropolitan Police measure accuracy by comparing false alerts against the total number of faces scanned by the system; independent researchers, on the other hand, assess the accuracy of the flags the system actually generates. It is therefore unclear how accurate facial recognition truly is; nevertheless, the Metropolitan Police have now begun to use live facial recognition cameras operationally.
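A short calculation shows how the same trial can yield both headline figures. The 42 flags and eight verified matches are the numbers quoted above; the total number of faces scanned is a hypothetical figure chosen for illustration, since the true total was not reported here.

```python
# Figures quoted above from the independent analysis
flags = 42                # alerts generated by the facial recognition system
correct = 8               # flags verified as true matches
false_alerts = flags - correct   # 34 false alerts

# Hypothetical trial size, chosen for illustration
total_scanned = 34_000    # every face biometrically scanned by the cameras

# Police-style metric: false alerts as a share of everyone scanned
police_error_rate = false_alerts / total_scanned      # 0.001, i.e. ~0.1%

# Researcher-style metric: false alerts as a share of flags generated
researcher_error_rate = false_alerts / flags          # ~0.81, i.e. ~81%

print(f"{police_error_rate:.1%} vs {researcher_error_rate:.1%}")
# prints: 0.1% vs 81.0%
```

Both numbers are arithmetically correct; they simply answer different questions: "how often is a passer-by wrongly flagged?" versus "when the system raises an alert, how often is it wrong?"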

Privacy and bias

Civil liberties groups, such as Liberty and Big Brother Watch, have raised a variety of concerns regarding the police’s use of facial recognition. These groups argue that the deployment of facial recognition systems presents a clear threat to individual privacy and privacy as a social norm. Although facial recognition systems used by the police are designed to flag those on watch-lists, every single person that comes into the range of a camera will automatically have their face biometrically scanned. In particular, privacy groups have raised concerns about the use of facial recognition systems during political protests, arguing that their use may constitute a threat to the right to freedom of expression and may even represent a breach of human rights law.

Additionally, concerns have been raised regarding the racial and gender bias found to be prevalent in facial recognition systems across the world. A recent evaluative study of 189 facial recognition algorithms, conducted by the US Government’s National Institute of Standards and Technology, found that most algorithms exhibit “demographic differentials”. This means that a facial recognition system’s ability to match two images of the same person varies depending on demographic group. The study found that facial recognition systems were less effective at identifying BAME and female faces, meaning these groups are statistically more likely to be falsely flagged and potentially questioned by the police.

Final thoughts

From DNA to fingerprint identification, the police are constantly looking for new and innovative ways to help keep the public safe. In theory, the use of facial recognition is no different: the police argue that the ability to quickly identify a person of interest will make the public safer. However, unlike previous advancements, the effectiveness of facial recognition is largely unproven.

Civil liberties groups are increasingly concerned that facial recognition systems may infringe on the right to privacy and worry that their use will turn the public into walking biometric ID cards. Furthermore, research has indicated that the vast majority of facial recognition systems feature racial and gender bias, which could lead to women and BAME individuals experiencing repeated contact with the police due to false matches.

In summary, facial recognition systems provide the police with a new tool to help keep the public safe. However, in order to be effective and gain the trust of the public, it will be vital for the police to set out the safeguards put in place to prevent privacy violations and the steps taken to ensure that the systems do not feature racial and gender bias.  


Follow us on Twitter to see which topics are interesting our Research Officers this week.


Icons made by monkik from www.flaticon.com

An app a day … how m-health could revolutionise our engagement with the NHS

It seems like almost every day now we see in the news and read in newspapers about the increasing pressures on our NHS, strains on resources and the daily challenges facing already overworked GP staff.

Mobile health applications (m-health apps) are increasingly being integrated into practice and are now being used to perform some tasks which would have traditionally been performed by general practitioners (GPs), such as those involved in promoting health, preventing disease, diagnosis, treatment, monitoring, and signposting to other health and support services.

How m-health is transforming patient interactions with the NHS

In 2015, International Longevity Centre research found some distinct demographic divides in health information-seeking behaviour. While 50% of those aged 25-34 preferred to receive health information online, only 15% of those aged 65 and over preferred the internet. The internet remained the favourite source of health information for all age groups younger than 55. And while not specifically referring to apps, the fact that many people in this research expressed a preference for seeking health information online indicates that there is potential for wider use of effective, NHS-approved health apps.

A report published in 2019 by Reform highlighted the unique opportunity that m-health offered in the treatment and management of mental health conditions. The report found that in the short to medium-term, much of the potential of apps and m-health lies in relieving the pressure on frontline mental health services by giving practitioners more time to spend on direct patient care and providing new ways to deliver low-intensity, ongoing support. In the long-term, the report suggests, data-driven technologies could lead to more preventative and precise care by allowing for new types of data-collection and analysis to enhance understandings of mental health.

M-health, e-health and telecare are also potentially important tools in the delivery of rural care, particularly to patients who are elderly or who live in remote parts of the UK. These technologies enable patients to submit relevant readings to a GP or hospital consultant, and to receive updates, information and advice on their condition, without having to travel to a consultation in person. However, some have highlighted that this loss of personal contact could leave patients feeling isolated and unable to ask questions, and could reduce the likelihood of treatment being carried out, particularly among older people, if they feel it has been prescribed by a “machine” rather than a doctor.

Supporting people to take ownership of their own health

Research has suggested that wearable technologies across the board, not just m-health apps but also devices such as Fitbits, are acting as incentives to help people self-regulate and adopt healthier habits, such as walking more or drinking more water. One study found that different tracking and monitoring tools that collect and analyse health and wellness data over time can inform consumers of their baseline activity level, encourage personal engagement in health and wellbeing, and ultimately lead to positive behavioural change. Another report from the International Longevity Centre also highlights the potential impact of apps on preventative healthcare: promoting behaviour change and encouraging people to make healthier choices such as stopping smoking or reducing alcohol intake.
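The "baseline then nudge" idea the study describes can be sketched very simply: establish a personal baseline from past data, then prompt the user when they fall well below it. The step counts and the 80% threshold below are invented for the example, not drawn from any particular tracker:

```python
# A toy illustration of how an activity tracker might use a personal
# baseline to trigger a nudge. All numbers are hypothetical.

def baseline(daily_steps: list[int]) -> float:
    """Personal baseline: mean of the recorded history."""
    return sum(daily_steps) / len(daily_steps)

def needs_nudge(today: int, history: list[int], threshold: float = 0.8) -> bool:
    """Prompt the user when today falls well below their own baseline."""
    return today < threshold * baseline(history)

history = [6200, 7100, 5400, 8000, 6800, 7300, 6900]  # last week's step counts
print(f"baseline: {baseline(history):.0f} steps/day")
print(needs_nudge(3500, history))  # True: well below this user's baseline
print(needs_nudge(6500, history))  # False: a normal day, no prompt
```

Comparing each user against their own history, rather than a universal target, is what lets the same device motivate both a sedentary and a very active person.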

Home testing kits for conditions such as bowel cancer and remote sensors to monitor blood sugar levels in type 1 diabetics are also becoming more commonplace as methods to help people take control of monitoring their own health. Roll-outs of blood pressure and heart rhythm monitors enable doctors to see results through an integrated tablet, monitor a patient’s condition remotely, make suggestions on changes to medication or pass comments on to patients directly through an email or integrated chat system, without the patient having to attend a clinic in person.

Individual test kits from private sector firms, including “Monitor My Health”, are now also increasingly available for people to purchase. Buyers complete the kits, which usually include instructions on home blood testing for conditions like diabetes, high cholesterol and vitamin D deficiency. The collected samples are then returned via post, analysed in a laboratory and the results communicated to the patient via an app, with no information about the test stored on their personal medical records. While the app results will recommend if a trip to see a GP is necessary, there is no obligation on the part of the company involved or the patient to act on the results if they choose not to. The kits are aimed at “time-poor” people over the age of 16, who want to “take control of their own healthcare”, according to the kit’s creator, but some have suggested that, instead of improving the patient journey by making testing more convenient, a lack of regulation could dilute the quality of testing. Removing the “human element”, they warn, particularly from initial diagnosis consultations, could lead to errors.

But what about privacy?

Patient-driven healthcare which is supported and facilitated by the use of e-health technologies and m-health apps is designed to support an increased level of information flow, transparency, customisation, collaboration and patient choice and responsibility-taking, as well as quantitative, predictive and preventive aspects for each user. However, it’s not all positive, and concerns are already being raised about the collection and storage of data, its use and the security of potentially very sensitive personal data.

Data theft or loss is one of the major security concerns when it comes to using m-health apps. However, another challenge is the unwitting sharing of data by users, which despite GDPR requirements can happen when people accept terms and conditions or cookie notices without fully reading or understanding the consequences for their data. Some apps, for example, collect and anonymise data to feed into further research or analytics about the use of the app or sell it on to third parties for use in advertising.
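To make "collect and anonymise" concrete, a minimal sketch of the kind of pseudonymisation an app vendor might apply before sharing usage records for research is shown below. The field names and the keyed-hash approach are illustrative assumptions, not any particular vendor's pipeline, and real GDPR-grade anonymisation involves far more than hashing an identifier:

```python
# A minimal, hypothetical pseudonymisation step: replace the direct
# identifier with a keyed hash and drop contact details before sharing.
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # held by the vendor, never shared

def pseudonymise(record: dict) -> dict:
    """Swap the user ID for a stable pseudonym and strip direct identifiers."""
    token = hmac.new(SECRET_KEY, record["user_id"].encode(), hashlib.sha256).hexdigest()
    return {
        "user_token": token[:16],        # stable pseudonym; not reversible without the key
        "age_band": record["age_band"],  # coarse attributes are kept for analysis
        "sessions_per_week": record["sessions_per_week"],
        # name, email and exact identifiers are deliberately dropped
    }

raw = {"user_id": "u-1842", "name": "A. Patient", "email": "a@example.com",
       "age_band": "35-44", "sessions_per_week": 3}
print(pseudonymise(raw))
```

Note that a stable pseudonym still allows records about the same user to be linked over time, which is exactly why regulators treat pseudonymised data as personal data rather than truly anonymous data.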

Final thoughts

The integration of mobile technologies and the internet into medical diagnosis and treatment has significant potential to improve the delivery of health and care across the UK, easing pressure on frontline staff and services and providing more efficient care, particularly for those people who are living with long-term conditions which require monitoring and management.

However, clinicians and researchers have been quick to emphasise that while there are significant benefits to both the doctor and the patient, care must be taken to ensure that the integrity and trust within the doctor-patient relationship is maintained, and that people are not forced into m-health approaches without feeling supported to use the technology properly and manage their conditions effectively. If training, support and confidence of users in the apps is not there, there is the potential for the roll-out of apps to have the opposite effect, and lead to more staff answering questions on using the technology than providing frontline care.



“We’ve updated our privacy policy”: GDPR two years on

by Scott Faulds

Almost two years ago, the General Data Protection Regulation (GDPR) came into force across the European Union (EU) and European Economic Area (EEA), creating what many consider to be the most extensive data protection regulation in the world. The introduction of GDPR facilitated the harmonisation of data privacy laws across Europe and provided citizens with greater control over how their data is used. The regulation sets out the rights of data subjects, the duties of data controllers/processors, the transfer of personal data to third countries, supervisory authorities, cooperation among member states, and remedies, liability or penalties for breach of rights. However, whilst the regulation itself is extensive, questions have been raised regarding the extent to which GDPR has been successful at protecting citizens’ data and privacy.

Breach Notifications and Fines

Critics of GDPR have argued that whilst the regulation has been effective as a breach notification law, it has so far failed to impose impactful fines on companies which have failed to comply with the GDPR. National data protection authorities (such as the Information Commissioner’s Office (ICO) in the UK) under the GDPR have the ability to impose fines of up to €20m or up to 4% of an organisation’s total global turnover, whichever is higher. Since the introduction of the GDPR, data protection authorities across the EEA have experienced a “massive increase” in reports of data breaches. However, this has yet to translate into substantive financial penalties. For example, Google has been issued a €50m fine, the highest issued so far* by CNIL, the French data protection authority. CNIL found that Google failed to provide sufficient and transparent information that allowed customers to give informed consent to the processing of personal data when creating a Google account during the set-up process of an Android powered device. This is a serious breach of multiple GDPR articles and CNIL argued that the infringements contravene the principles of transparency and informed consent which are at the heart of the GDPR.

*  The confirmation of record fines issued by ICO to British Airways (£183m) and Marriott International (£99m) has been delayed until 31st March 2020.

However, the fine imposed on Google amounts to approximately 0.04% of their total global turnover, which some have argued is simply too small an amount to act as any real deterrent. Therefore, it could be said that while GDPR has been effective in encouraging companies to be transparent when data misuse occurs, national data protection authorities have yet to make use of their ability to impose large financial penalties to act as a deterrent.
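The gap between the theoretical cap and the fine actually imposed is easy to check. The turnover figure below is illustrative (roughly the right order of magnitude for Google's parent company), not an exact financial statement:

```python
# Back-of-the-envelope check of the figures in the text: the GDPR cap is the
# higher of EUR 20m or 4% of global turnover, yet the EUR 50m CNIL fine is a
# tiny fraction of Google's turnover. The turnover figure is illustrative.

def gdpr_max_fine(global_turnover_eur: float) -> float:
    """Upper GDPR fine: the higher of EUR 20m or 4% of total global turnover."""
    return max(20_000_000, 0.04 * global_turnover_eur)

turnover = 120_000_000_000  # illustrative global turnover of ~EUR 120bn
fine = 50_000_000           # the fine CNIL actually imposed

print(f"Theoretical cap:            EUR {gdpr_max_fine(turnover):,.0f}")
print(f"Fine as share of turnover:  {fine / turnover:.2%}")
```

On those assumptions the regulator could in principle have fined nearly a hundred times more, which is the substance of the "too small to deter" criticism.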

In recent months, the German and Dutch data protection authorities have both created frameworks which set out how they intend to calculate GDPR fines. Analysis of their fining structures indicates that both models will operate based on the severity of the GDPR violation. However, both structures allow for the data protection authority to impose the maximum fine if the amount is not deemed fitting. The International Association of Privacy Professionals believes this will result in significantly higher and more frequent fines than those issued previously, and has suggested that the European Data Protection Board may consider implementing a harmonised fine model across Europe.

Brussels Effect

The effects of the GDPR can be felt beyond Europe, with companies such as Apple and Microsoft committing to extend GDPR protections to their entire customer base, no matter their location.  Even the COO of Facebook, Sheryl Sandberg, admitted that the introduction of GDPR was necessary due to the scale of data collected by technology companies. The ability of the EU to influence the global regulatory environment has been described by some experts as the “Brussels Effect”. They argue that a combination of the size, importance and regulatory power of the EU market is forcing companies around the world to match EU regulations. Additionally, this effect can be seen to be influencing data protection legislation across the world, with governments in Canada, Japan, New Zealand, Brazil, South Africa and California all introducing updated privacy laws based on the GDPR. As a result, it can be said that the introduction of the GDPR has enabled the EU to play a key role in global discussions regarding privacy and how citizens’ data is used worldwide. 

Brexit

Following the UK’s exit from the EU, the GDPR will remain in force until the end of the transition period (31st December 2020), after which the UK Government intends to introduce the UK GDPR. However, as the UK will no longer be a member state of the EU, it will need to seek what is known as an “adequacy agreement” with the EU. This allows businesses in the EEA and UK to freely exchange data. The UK government believes that this agreement will be signed during the transition period, as the UK GDPR is not materially different from the EU GDPR. However, it should be noted that the most recent adequacy agreement between the EU and Japan took two years to complete.

Final Thoughts

The introduction of the GDPR almost two years ago has had a variety of impacts on the current discussion surrounding privacy and how best to protect our personal data. Firstly, the GDPR has forced companies to become more transparent when data misuse occurs and gives national data protection authorities the power to scrutinise companies’ approaches to securing personal data. Secondly, the influence of the GDPR has helped to strengthen privacy laws across the world and has forced companies to provide individuals with more control over how their data is used. However, the effectiveness of the GDPR is limited by the lack of a common approach to fines for GDPR violations. In order for it to develop fully, it will be important for the European Data Protection Board to provide guidance on how to effectively fine those who breach the GDPR.



How AI is transforming local government


By Steven McGinty

Last year, Scottish Local Government Chief Digital Officer Martyn Wallace spoke to the CIO UK podcast and highlighted that in 2019 local government must take advantage of artificial intelligence (AI) to deliver better outcomes for citizens. He explained:

“I think in the public sector we have to see AI as a way to deliver better outcomes and what I mean by that is giving the bots the grunt work – as one coworker called it, ‘shuffling spreadsheets’ – and then we can release staff to do the more complex, human-touch things.”

To date, very few councils have felt brave enough to invest in AI. However, the mood is slowly starting to change and there are several examples in the UK and abroad that show artificial intelligence is not just a buzzword, but a genuine enabler of change.

In December, Local Government Minister Rishi Sunak announced the first round of winners from a £7.5 million digital innovation fund. The 16 winning projects, from 57 councils working in collaborative teams, were awarded grants of up to £100,000 to explore the use of a variety of digital technologies, from Amazon Alexa-style virtual assistants to support people living in care, to the use of data analytics to improve education plans for children with special needs.

These projects are still in their infancy, but there are councils who are further along with artificial intelligence, and have already learned lessons and had measurable successes. For instance, Milton Keynes Council have developed a virtual assistant (or chatbot) to help respond to planning-related queries. Although still at the ‘beta’ stage, trials have shown that the virtual assistant is better able to validate major applications, as these are often based on industry standards, rather than household applications, which tend to be more wide-ranging.

Chief planner, Brett Leahy, suggests that introducing AI will help planners focus more on substantive planning issues, such as community engagement, and let AI “take care of the constant flow of queries and questions”.

In Hackney, the local council has been using AI to identify families that might benefit from additional support. The ‘Early Help Predictive System’ analyses data related to (among others) debt, domestic violence, anti-social behaviour, and school attendance, to build a profile of need for families. By taking this approach, the council believes they can intervene early and prevent the need for high cost support services. Steve Liddicott, head of service for children and young people at Hackney council, reports that the new system is identifying 10 or 20 families a month that might be of future concern. As a result, early intervention measures have already been introduced.

In the US, the University of Chicago’s initiative ‘Data Science for Social Good’ has been using machine learning (a form of AI) to help a variety of social-purpose organisations. This has included helping the City of Rotterdam to understand their rooftop usage – a key step in their goal to address challenges with water storage, green spaces and energy generation. In addition, they’ve also helped the City of Memphis to map properties in need of repair, enabling the city to create more effective economic development initiatives.

Yet, like most new technologies, there has been some resistance to AI. In December 2017, plans by Ofsted to use machine learning tools to identify poorly performing schools were heavily criticised by the National Association of Head Teachers, which argued that Ofsted should move away from a data-led approach to inspection and that it was important that the “whole process is transparent and that schools can understand and learn from any assessment.”

Further, hyperbole-filled media reports have led to a general unease that introducing AI could lead to a reduction in the workforce. For example, PwC’s 2018 ‘UK Economic Outlook’ suggests that 18% of public administration jobs could be lost over the next two decades. Although it’s likely many jobs will be automated, no one really knows how the job market will respond to greater AI, and whether the creation of new jobs will outnumber those lost.

Should local government invest in AI?

In the next few years, it’s important that local government not only considers the clear benefits of AI, but also addresses the public concerns. Many citizens will be in favour of seeing their taxes go further and improvements in local services – but not if this infringes on their privacy or reduces transparency. Pilot projects, therefore, which provide the opportunity to test the latest technologies, work through common concerns, and raise awareness among the public, are the best starting point for local councils looking to move forward with this potentially transformative technology.



Protecting privacy in the aftermath of the Facebook-Cambridge Analytica scandal

By Steven McGinty

On 4 June, Information Commissioner Elizabeth Denham told MEPs that she was ‘deeply concerned’ about the misuse of social media users’ data.

She was speaking at the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) inquiry into the use of 87 million Facebook profiles by Cambridge Analytica and its consequences for data protection and the wider democratic process. The whole affair has shone a light on how Facebook collected, shared, and used data to target people with political and commercial advertising. And, in a warning to social media giants, she announced:

Online platforms can no longer say that they are merely a platform for content; they must take responsibility for the provenance of the information that is provided to users.”

Although this is tough talk from the UK’s guardian of information rights – and many others, including politicians, have used similar language – the initial response from the Information Commissioner was hardly swift.

The Information Commissioner’s Office (ICO) struggled at the first hurdle, failing to secure a search warrant for Cambridge Analytica’s premises. Four days after Elizabeth Denham announced her intention to raid the premises, she was eventually granted a warrant following a five-hour hearing at the Royal Courts of Justice. This delay – and concerns over the resources available to the ICO – led commentators to question whether the regulator has sufficient powers to tackle tech giants such as Facebook.

Unsurprisingly, it was not long before the Information Commissioner went into “intense discussion” with the government to increase the powers at her disposal. At a conference in London, she explained:

Of course, we need to respect the rights of companies, but we also need streamlined warrant processes with a lower threshold than we currently have in the law.”

Conservative MP Damian Collins, Chair of the Digital, Culture, Media and Sport select committee, expressed similar sentiments, calling on Twitter for new enforcement powers to be included in the Data Protection Bill.

Eventually, after a year of debate, the Data Protection Act 2018 was passed on 23 May. On the ICO blog, Elizabeth Denham welcomed the new law, highlighting that:

The legislation requires increased transparency and accountability from organisations, and stronger rules to protect against theft and loss of data with serious sanctions and fines for those that deliberately or negligently misuse data.”

By introducing this Act, the UK Government is attempting to address a number of issues. However, the Information Commissioner will be particularly pleased that she’s received greater enforcement powers, including the creation of two new criminal offences: the ‘alteration etc of personal data to prevent disclosure’ and the ‘re-identification of de-identified personal data’.

GDPR

On 25 May, the long awaited General Data Protection Regulation (GDPR) came into force. The Data Protection Act incorporates many of the provisions of GDPR, such as the ability to levy heavy fines on organisations (up to €20,000,000 or 4% of global turnover). The Act also derogates from EU law in areas such as national security and the processing of immigration-related data. The ICO recommend that GDPR and the Data Protection Act 2018 are read side by side.

However, not everyone is happy with GDPR and the new Data Protection Act. Tomaso Falchetta, head of advocacy and policy at Privacy International, has highlighted that although they welcome the additional powers given to the Information Commissioner, there are concerns over the:

wide exemptions that undermine the rights of individuals, particularly with a wide exemption for immigration purposes and on the ever-vague and all-encompassing national security grounds”.

In addition, Dominic Hallas, executive director of The Coalition for a Digital Economy (Coadec), has warned that we must avoid a hasty regulatory response to the Facebook-Cambridge Analytica scandal. He argues that although it’s tempting to hold social media companies liable for the content of users, there are risks in taking this action:

Pushing legal responsibility onto firms might look politically appealing, but the law will apply across the board. Facebook and other tech giants have the resources to accept the financial risks of outsized liability – startups don’t. The end result would entrench the positions of those same companies that politicians are aiming for and instead crush competitors with fewer resources.

Final thoughts

The Facebook-Cambridge Analytica scandal has brought privacy to the forefront of the public’s attention. And although the social media platform has experienced a minor decline in user engagement and the withdrawal of high-profile individuals (such as entrepreneur Elon Musk), its global presence and the convenience it offers to users suggest it’s going to be around for some time to come.

Therefore, the ICO and other regulators must work with politicians, tech companies, and citizens to have an honest debate on the limits of privacy in a world of social media. The GDPR and the Data Protection Act provide a good start in laying down the ground rules. However, in the ever-changing world of technology, it will be important that this discussion continues to find solutions to future challenges. Only then will we avoid walking into another global privacy scandal.


The Knowledge Exchange provides information services to local authorities, public agencies, research consultancies and commercial organisations across the UK. Follow us on Twitter to see what developments in policy and practice are interesting our research team. 


How data leaks can bring down governments


By Steven McGinty

In July 2017, the Swedish Government faced a political crisis after admitting a huge data leak that affected almost all of its citizens.

The leak, which dates back to a 2015 outsourcing contract between the Swedish Transport Agency and IBM Sweden, occurred when IT contractors from Eastern Europe were allowed access to confidential data without proper security clearance. Media reports suggested that the exposed data included information about vehicles used by the armed forces and the police, as well as the identities of some security and military personnel.

The political fallout was huge for Sweden’s minority government. Infrastructure Minister Anna Johansson and Interior Minister Anders Ygeman both lost their positions, whilst the former head of the transport agency, Maria Ågren, was found to have been in breach of the country’s privacy and data protection laws when she waived the security clearance of foreign IT workers. In addition, the far-right Sweden Democrats called for an early election, and Prime Minister Stefan Löfven faced a vote of no-confidence in parliament (although he easily survived).

However, it’s not just Sweden where data leaks have become political. Last year, the UK saw several high-profile incidents.

Government Digital Service (GDS)

The UK Government’s main data site incorrectly published the email addresses and “hashed passwords” of its users. There was no evidence that data had been misused, but the GDS recommended that users change their password as a precaution. And although users did not suffer any losses, it’s certainly embarrassing for the agency responsible for setting the UK’s digital agenda.
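The password reset was a sensible precaution because "hashed" does not mean "safe": whether leaked hashes can be cracked depends heavily on how they were produced, and the GDS incident report does not say which scheme was used. The sketch below contrasts a fast unsalted hash, which falls quickly to precomputed lookup tables, with a salted, deliberately slow derivation; both functions here are illustrative, not GDS's actual implementation:

```python
# Why leaked "hashed passwords" still warrant a reset: an unsalted fast hash
# can be reversed in bulk, while a salted slow derivation resists cracking.
import hashlib
import os

def weak_hash(password: str) -> str:
    # Unsalted SHA-256: identical passwords give identical hashes,
    # so common passwords fall to precomputed lookup tables.
    return hashlib.sha256(password.encode()).hexdigest()

def strong_hash(password: str, salt=None):
    # A per-user random salt plus many iterations makes each guess expensive.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == digest

salt, digest = strong_hash("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("password123", salt, digest))                   # False
```

Because an attacker cannot tell from the outside which scheme was used, the only safe assumption after a leak is that the hashes are crackable, hence the advice to change passwords.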

Scottish Government

Official documents revealed that Scottish Government agencies experienced “four significant data security incidents” in 2016-17. Three out of four of these cases breached data protection legislation.

Disclosure Scotland, a body which often deals with highly sensitive information through its work vetting individuals, was one organisation that suffered a data leak. This involved a member of staff sending a mass email in which the addresses of all recipients were visible to one another (a breach of the Data Protection Act).

Murdo Fraser, MSP for the Scottish Conservatives, criticised the data breaches, warning:

These mistakes are entirely the fault of the Scottish government and, worryingly, may signal security weaknesses that hackers may find enticing.”

Hacking parliaments

In the summer of 2017, the UK parliament suffered a ‘brute force’ attack, resulting in 90 email accounts with weak passwords being hacked and part of the parliamentary email system being taken offline. A few months later, the Scottish Parliament experienced a similar sustained attack on parliamentary email accounts. MPs have suggested Russia or North Korea could be to blame for both attacks.
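A brute-force attack simply tries common passwords against many accounts, so it succeeds only where passwords are weak. A very basic server-side strength check like the one below blocks the most guessable choices; the rules and the word list here are illustrative, not parliament's actual password policy:

```python
# A hypothetical minimal password-strength check of the kind that defeats
# simple brute-force guessing. Rules and word list are illustrative only.

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "parliament1"}

def is_acceptable(password: str) -> bool:
    """Reject short passwords, common passwords and single-character-class strings."""
    if len(password) < 12:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
    ]
    return sum(classes) >= 2  # require at least two character classes

print(is_acceptable("letmein"))             # False: short and on the common list
print(is_acceptable("Tr4falgar-Square-9"))  # True: long, mixed-class, uncommon
```

In practice such checks are paired with rate limiting and lockouts, since even strong passwords benefit from slowing an attacker's guess rate.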

MPs sharing passwords

In December 2017, the Information Commissioner warned MPs over sharing passwords. This came after a number of Conservative MPs admitted they shared passwords with staff. Conservative MP Nadine Dorries explained:

My staff log onto my computer on my desk with my login every day. Including interns on exchange programmes.”

Their remarks were an attempt to defend the former First Secretary of State, Damian Green, over allegations he watched pornography in his parliamentary office.

Final thoughts

The Swedish data leak shows the political consequences of failing to protect data. The UK’s data leaks have not led to the same level of political scrutiny, but it’s important that UK politicians stay vigilant and ensure data protection is a key priority. Failure to protect citizen data may not only have financial consequences for citizens, but could also erode confidence in public institutions and threaten national security.



Drones in the city: should we ban drone hobbyists?


By Steven McGinty

Drones are becoming an increasingly observable feature of modern cities, from tech enthusiasts flying drones in local parks to engineers using them to monitor air pollution. And there have also been some high profile commercial trials such as Amazon Prime Air, an ambitious 30-minute delivery service.

However, introducing drones into the public realm has been something of a bumpy ride. Although the Civil Aviation Authority (CAA) produces guidance to ensure drones are flown safely and legally, there have been a number of hazardous incidents.

For example, in April, the first near-miss involving a passenger jet and more than one drone was recorded. The incident at Gatwick Airport saw two drones flying within 500m of an Airbus A320, with one pilot reporting a “significant risk of collision” had they been on a different approach path. In addition – and just 30 minutes later – one of these drones flew within 50m of another passenger jet, a Boeing 777.

Videos clearly taken from drones have also been uploaded to websites such as YouTube, in clear breach of the CAA’s rules prohibiting the flying of drones over or within 150m of built-up areas. The footage includes events such as the Cambridge Folk Festival, a match at Liverpool FC’s Anfield Stadium, and Nottingham’s Goose Fair. Jordan Brooks, who works for Upper Cut Productions – a company which specialises in using drones for aerial photography and filming – explains that:

They look like toys. For anyone buying one you feel like you’re flying a toy ‘copter when actually you’ve got a hazardous helicopter that can come down and injure somebody.

Privacy concerns have also started to emerge. Sally Annereau, data protection analyst at law firm Taylor Wessing, highlights a recent European case which held that a suspect’s rights had been infringed by a homeowner’s CCTV recording him whilst he was in a public place. Although not specifically about drones, Sally Annereau suggests this decision will have far reaching consequences, with potential implications for drone users recording in public and sharing their footage on social media sites. The Information Commissioner’s Office (ICO) has already issued guidance for drones.

The CAA report that there were more than 3,456 incidents involving drones in 2016. This is a significant increase on the 1,237 incidents in 2015.

The response

Cities have often taken contradictory approaches to drones. Bristol City Council has banned their use in the majority of its parks and open spaces. Similarly, several London boroughs have introduced ‘no drone zones’, although the London Borough of Richmond upon Thames has a relatively open policy, only banning drones over Richmond Park. Further, Lambeth Council requires hobbyists to complete an application form “to ensure suitability”, a standard similar to that applied to commercial drone pilots.

There have also been several accusations of double standards, as large commercial operators such as Amazon receive exemptions to CAA rules ahead of photographers recording events, hospitals delivering blood, and researchers collecting data.

Although cities have a responsibility to protect the public, they also have to ensure citizens are able to exercise their rights. The air is a common space, and as such cities must ensure that hobbyists – as well as multinational firms – can enjoy the airspace. Thus, it might be interesting to see cities take a more positive approach and designate ‘drone zones’, where hobbyists can get together and fly their drones away from potential hazards.

