Knowledge from a distance: recent webinars on public and social policy

During the national lockdown, it’s been impossible for most of us to attend conferences and seminars. But many organisations have been harnessing the power of technology to help people share their knowledge, ideas and experience in virtual seminars.

In the past few weeks, the research officers at The Knowledge Exchange have joined some of these webinars, and in today’s blog post we’d like to share with you some of the public and social policy issues that have been highlighted in these online events.

The liveable city

Organised by the Danish Embassy in the UK, this webinar brought together a range of speakers from Denmark and the UK to consider how our cities may change post COVID-19, including questions around green space, high street recovery, active travel and density and types of residential living accommodation in our towns and cities.

Speakers from two London boroughs and from architectural design and urban planning backgrounds gave examples of experiences in Newham, Ealing and Copenhagen, as well as other more general examples from across the UK and Denmark. The seminar’s website also includes links to presentations from previous Liveable City events in Manchester, Edinburgh, Bristol and Glasgow.


What next for public health?

“Healthcare just had its 2008 banking crisis… COVID-19 has generated a real seismic shift within the sector and I don’t think we will ever go back”

This webinar brought together commentators and thought leaders from across the digital health and tech sectors to think about how public health may be transformed by our experiences of the COVID-19 pandemic and the significant shift to digital and online platforms to deliver care.

The speakers discussed data, privacy and trust and the need to recognise different levels of engagement with digital platforms to ensure that specific groups like older people don’t feel unable to access services. They also discussed the importance of not being driven by data, but using data to help us to make better decisions. The webinar was organised by BIMA, a community of businesses, charities and academia across the UK.


Green cities

This project, organised by the Town and Country Planning Association (TCPA), comprised three webinars, each looking at a different element of green infrastructure within cities: designing and planning green infrastructure, assessing the quality of its different types, and highlighting the positive impacts of incorporating more good-quality green space, for mental and physical health as well as for environmental purposes.


Rough sleeping and homelessness during and after the coronavirus

Organised by the Centre for London, this webinar brought together speakers from across the homelessness sector within London, including St Mungo’s, the Greater London Authority (GLA) and Croydon Council, to explore how the COVID-19 pandemic was impacting people who are homeless or sleeping rough in the city.

Each speaker brought insights from their own experience of supporting homeless people in the capital during the COVID-19 pandemic so far. They highlighted some of the challenges, as well as some of the more positive steps forward, particularly in relation to co-operation and partnership working across different levels of government and with other sectors such as health.

They also commended everyone involved for the speed at which they acted to support homeless people, particularly those who were vulnerable or at risk. However, concerns were raised around future planning and the importance of not slipping back into old ways of working once the pandemic response tails off.


Poverty, health and Covid-19: emerging lessons in Scotland

This webinar was hosted by the Poverty Alliance as part of a wider webinar series. It looked at how to ‘build back better’ following the pandemic, with a particular focus on addressing the long-standing inequalities that exist throughout society.

The event included presentations from Dr Gerry McCartney, Head of the Public Health Observatory at Public Health Scotland, Dr Anne Mullin, Chair of the Deep End GPs, and Professor Linda Bauld, Professor of Public Health at the University of Edinburgh.

A key message throughout was that while the immediate health impacts of the pandemic have been huge, there is an urgent need to acknowledge and address the “long-term challenge” – the impact on health caused by the economic and social inequalities associated with the pandemic.

It is estimated that over 10 years, the impact of inequalities will be six times greater than that of an unmitigated pandemic. Therefore, ‘building back better’ is essential in order to ensure long-term population health.


Returning to work: addressing unemployment after Covid-19

This webinar was also hosted by the Poverty Alliance as part of their wider webinar series on the pandemic.

The focus here was how to address the inevitable rise in unemployment following the pandemic – the anticipated increase in jobless numbers is currently estimated to be over three million.

The event included presentations from Kathleen Henehan, Research and Policy Analyst at Resolution Foundation, Anna Ritchie Allan, Executive Director at Close the Gap, and Tony Wilson, Director of the Institute for Employment Studies.

The webinar highlighted the unprecedented scale of the problem – noting that more than half of the working population are currently not working due to the pandemic, being either unemployed, furloughed or in receipt of self-employment support.

A key theme of the presentations was that certain groups are likely to be disproportionately affected by unemployment as the government’s support schemes draw to a close later this year. These include women – particularly those from BAME groups, the lower paid and migrants – and young people. It is therefore essential that the support the government provides, in the form of skills, training and job creation schemes, addresses this, and is both gender-sensitive and intersectional.


Supporting the return to educational settings of autistic children and young people

The aim of this webinar, provided by the National Autism Implementation Team (NAIT), was to offer a useful overview of how to support autistic children and young people, and those with additional support needs, back into educational settings following the pandemic.

Currently around 25% of learners in mainstream schools have additional support needs, and it is generally accepted that good autism practice is beneficial for all children.

The webinar set out eight key messages for supporting a successful return, which included making anticipatory adjustments rather than ‘waiting and seeing’, using visual supports, providing predictability, planning for movement breaks and provision of a ‘safe space’ for each child.  The importance of listening to parents was also emphasised.



Ellisland Farm, Dumfries. “P1050381.JPG” by ejbluefolds is licensed under CC BY-NC 2.0

Burns at Ellisland

Our Research Officer, Donna Gardiner, has also been following some cultural webinars, including one that focused on the links between Scotland’s national poet and the Ellisland Farm site. The webinar was led by Professor Gerard Carruthers, Francis Hutcheson Chair of Scottish Literature at the University of Glasgow and co-director of the Centre for Robert Burns Studies.

Robert Burns lived at Ellisland Farm in Dumfriesshire between May 1788 and November 1791, and it was here that he produced a significant proportion of his work – 23% of his letters and 28% of his songs and poems, including the famous Tam O’Shanter and Auld Lang Syne.

The presentation looked at how Robert Burns was influenced by the farm itself and its location on the banks of the River Nith. It also touched on his involvement with local politics and friends in the area, which also influenced his work.

It was suggested that the Ellisland farm site could be considered in many ways to be the birthplace of wider European Romanticism. The webinar also included contributions from Joan McAlpine MSP, who is chair of the newly formed Robert Burns Ellisland Trust. She discussed how to help promote and conserve this historic site, particularly given the impact of the coronavirus on tourism.


Follow us on Twitter to see which topics are interesting our research team.

Facial recognition systems: ready for prime time?

by Scott Faulds

Across the UK, it is estimated that there are 1.85 million CCTV cameras, approximately one camera for every 36 people.  From shopping centres to railway stations, CCTV cameras have become a normal part of modern life and modern policing, with research from the College of Policing indicating that CCTV modestly reduces overall crime. Currently, most of the cameras utilised within the CCTV system are passive; they act as a deterrent or provide evidence of an individual’s past location or of a crime committed.

However, advances in artificial intelligence have allowed for the development of facial recognition systems, which could enable CCTV cameras to proactively identify suspects or active crime in real time. So far, the use of facial recognition systems in limited pilots has received a mixed reaction, with the Metropolitan Police arguing that it is their duty to use new technologies to keep people safe. But privacy campaigners argue that the technology poses a serious threat to civil liberties and are concerned that facial recognition systems contain gender and racial bias.

How does it work?

Facial recognition systems operate in a similar way to how humans recognise faces, by identifying familiar facial characteristics, but on a much larger scale and in a data-driven way. Whilst there are a variety of different types of facial recognition system, the basic steps are as follows:

1. An image of a face is captured within a photograph, video or live footage. The face can be within a crowd and does not necessarily have to be directly facing a camera.

2. Facial recognition software biometrically scans the face and converts unique facial characteristics (the distance between your eyes, the distance from forehead to chin etc.) into a mathematical representation known as a facial signature.

3. The facial signature can then be compared to faces stored within a database (such as a police watchlist) or faces previously flagged by the system.

4. The system then determines whether it has identified a match; in most systems, the level of confidence required before the system flags a match can be altered.
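Steps 3 and 4 above can be sketched in a few lines of code. This is a minimal illustration only: real systems derive facial signatures from trained neural networks, whereas here a signature is just a short list of numbers, and the watchlist names, signature values and threshold are all invented for the example.

```python
import math

def distance(sig_a, sig_b):
    """Euclidean distance between two facial signatures."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

def find_match(probe, watchlist, threshold=0.6):
    """Return the closest watchlist entry if it falls within the confidence
    threshold, otherwise None. Lowering the threshold demands a closer
    match before the system flags an alert (step 4 above)."""
    best_name, best_dist = None, float("inf")
    for name, sig in watchlist.items():
        d = distance(probe, sig)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Invented example data: two watchlist signatures and one probe face
watchlist = {"suspect_a": [0.1, 0.9, 0.3], "suspect_b": [0.8, 0.2, 0.5]}
print(find_match([0.12, 0.88, 0.31], watchlist))  # → suspect_a
```

The threshold is the tunable “level of confidence” the article mentions: tightening it reduces false alerts but risks missing genuine matches, which is exactly the trade-off at issue in the police pilots discussed below.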

Facial recognition and the police

Over the past twelve months, the Metropolitan Police and South Wales Police have both operated pilots of facial recognition systems, designed to identify individuals wanted for serious and violent offences. These pilots involved the placement of facial recognition cameras in busy central areas, such as Westfield Shopping Centre, where the faces of people in large crowds were scanned and compared to a police watch-list. If the system flagged a match, police officers would ask the potential match to confirm their identity and, if the match was correct, they would be detained. Police forces have argued that the public broadly support the deployment of facial recognition and believe that the right balance has been found between keeping the public safe and protecting individual privacy.

The impact of the deployment of facial recognition by the police has been compared by some to the introduction of fingerprint identification. However, it is difficult to determine how successful these pilots have been, as there has been a discrepancy in the reporting of the accuracy of these facial recognition systems. According to the Metropolitan Police, 70% of wanted suspects would be identified walking past facial recognition cameras, whilst only one in 1,000 people would generate a false alert, an error rate of 0.1%. Conversely, independent analysis commissioned by the Metropolitan Police found that only eight out of 42 matches were verified as correct, an error rate of 81%.

The massive discrepancy in error rates can be explained by the way in which you assess the accuracy of a facial recognition system. The Metropolitan Police measure accuracy by comparing successful and unsuccessful matches with the total number of faces scanned by the facial recognition system. Independent researchers, on the other hand, assess the accuracy of the flags generated by the system. It is therefore unclear how accurate facial recognition truly is. Nevertheless, the Metropolitan Police have now begun to use live facial recognition cameras operationally.
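The two error rates come from the same underlying numbers divided by different denominators. In the short illustration below, the alert counts follow the independent review cited above (eight of 42 alerts verified as correct); the number of faces scanned is an assumed figure, chosen only to reproduce the Met’s “one in 1,000” claim, not a published statistic.

```python
alerts = 42             # matches flagged by the system
verified = 8            # alerts confirmed as genuine
false_alerts = alerts - verified
faces_scanned = 34_000  # assumption for illustration, not a published figure

# Met-style measure: false alerts as a share of everyone scanned
error_vs_scanned = false_alerts / faces_scanned

# Researcher-style measure: false alerts as a share of alerts raised
error_vs_alerts = false_alerts / alerts

print(f"{error_vs_scanned:.1%}")  # 0.1% – "one in 1,000 people"
print(f"{error_vs_alerts:.0%}")   # 81% – "only eight of 42 correct"
```

Both figures are arithmetically correct; they simply answer different questions, which is why the choice of denominator matters so much in public reporting.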

Privacy and bias

Civil liberties groups, such as Liberty and Big Brother Watch, have raised a variety of concerns regarding the police’s use of facial recognition. These groups argue that the deployment of facial recognition systems presents a clear threat to individual privacy and to privacy as a social norm. Although facial recognition systems used by the police are designed to flag those on watch-lists, every single person that comes into the range of a camera will automatically have their face biometrically scanned. In particular, privacy groups have raised concerns about the use of facial recognition systems during political protests, arguing that their use may constitute a threat to the right to freedom of expression and may even represent a breach of human rights law.

Additionally, concerns have been raised regarding the racial and gender bias that has been found to be prevalent in facial recognition systems across the world. A recent evaluative study conducted by the US Government’s National Institute of Standards and Technology on 189 facial recognition algorithms found that most algorithms exhibit “demographic differentials”. This means that a facial recognition system’s ability to match two images of the same person varies depending on demographic group. The study found that facial recognition systems were less effective at identifying BAME and female faces, meaning these groups are statistically more likely to be falsely flagged and potentially questioned by the police.

Final thoughts

From DNA to fingerprint identification, the police are constantly looking for new and innovative ways to help keep the public safe. In theory, the use of facial recognition is no different: the police argue that the ability to quickly identify a person of interest will make the public safer. However, unlike previous advancements, the effectiveness of facial recognition is largely unproven.

Civil liberties groups are increasingly concerned that facial recognition systems may infringe on the right to privacy and worry that their use will turn the public into walking biometric ID cards. Furthermore, research has indicated that the vast majority of facial recognition systems feature racial and gender bias, which could lead to women and BAME individuals experiencing repeated contact with the police due to false matches.

In summary, facial recognition systems provide the police with a new tool to help keep the public safe. However, in order to be effective and gain the trust of the public, it will be vital for the police to set out the safeguards put in place to prevent privacy violations and the steps taken to ensure that the systems do not feature racial and gender bias.  





An app a day … how m-health could revolutionise our engagement with the NHS

It seems like almost every day we read reports about the increasing pressures on our NHS, the strains on its resources and the daily challenges facing already overworked GP staff.

Mobile health applications (m-health apps) are increasingly being integrated into practice and are now being used to perform some tasks which would have traditionally been performed by general practitioners (GPs), such as those involved in promoting health, preventing disease, diagnosis, treatment, monitoring, and signposting to other health and support services.

How m-health is transforming patient interactions with the NHS

In 2015 International Longevity Centre research found some distinct demographic divides on health information seeking behaviour. While 50% of those aged 25-34 preferred to receive health information online, only 15% of those aged 65 and over preferred the internet. The internet remained the favourite source of health information for all age groups younger than 55. And while not specifically referring to apps, the fact that many people in this research expressed a preference to seek health information online indicates that there is potential for wider use of effective, and NHS approved health apps.

A report published in 2019 by Reform highlighted the unique opportunity that m-health offered in the treatment and management of mental health conditions. The report found that in the short to medium-term, much of the potential of apps and m-health lies in relieving the pressure on frontline mental health services by giving practitioners more time to spend on direct patient care and providing new ways to deliver low-intensity, ongoing support. In the long-term, the report suggests, data-driven technologies could lead to more preventative and precise care by allowing for new types of data-collection and analysis to enhance understandings of mental health.

M-health, e-health and telecare are also potentially important tools in the delivery of rural care, particularly for people who are elderly or who live in remote parts of the UK. These technologies enable patients to submit relevant readings to a GP or hospital consultant, and to receive updates, information and advice on their condition, without having to travel to consult a doctor or nurse face-to-face. However, some have highlighted that this removal of personal contact could leave some patients feeling isolated and unable to ask questions, and could reduce the likelihood of treatment being carried out, particularly among older people, if they feel it has been prescribed by a “machine” and not a doctor.

Supporting people to take ownership of their own health

Research has suggested that wearable technologies across the board, not just m-health apps but also devices like “fitbits”, are acting as incentives to help people self-regulate and adopt healthier habits, such as walking more or drinking more water. One study found that different tracking and monitoring tools that collect and analyse health and wellness data over time can inform consumers of their baseline activity level, encourage personal engagement in health and wellbeing, and ultimately lead to positive behavioural change. Another report from the International Longevity Centre also highlights the potential impact of apps on preventative healthcare: promoting behaviour change and encouraging people to make healthier choices such as stopping smoking or reducing alcohol intake.

Home testing kits for conditions such as bowel cancer and remote sensors to monitor blood sugar levels in type 1 diabetics are also becoming more commonplace as methods to help people take control of monitoring their own health. Roll-outs of blood pressure and heart rhythm monitors enable doctors to see results through an integrated tablet, monitor a patient’s condition remotely, make suggestions on changes to medication or pass comments on to patients directly through an email or integrated chat system, without the patient having to attend a clinic in person.

Individual test kits from private sector firms, including “Monitor My Health”, are now increasingly available for people to purchase. The kits usually include instructions on home blood testing for conditions like diabetes, high cholesterol and vitamin D deficiency. The collected samples are then returned via post, analysed in a laboratory and the results communicated to the patient via an app, with no information about the test stored on their personal medical records. While the app results will recommend a trip to see a GP where necessary, there is no obligation on the part of the company involved or the patient to act on the results. The kits are aimed at “time-poor” people over the age of 16 who want to “take control of their own healthcare”, according to the kit’s creator, but some have suggested that instead of improving the patient journey by making testing more convenient, a lack of regulation could dilute the quality of testing. Removing the “human element”, they warn, particularly from initial diagnosis consultations, could lead to errors.

But what about privacy?

Patient-driven healthcare which is supported and facilitated by the use of e-health technologies and m-health apps is designed to support an increased level of information flow, transparency, customisation, collaboration and patient choice and responsibility-taking, as well as quantitative, predictive and preventive aspects for each user. However, it’s not all positive, and concerns are already being raised about the collection and storage of data, its use and the security of potentially very sensitive personal data.

Data theft or loss is one of the major security concerns when it comes to using m-health apps. However, another challenge is the unwitting sharing of data by users, which despite GDPR requirements can happen when people accept terms and conditions or cookie notices without fully reading or understanding the consequences for their data. Some apps, for example, collect and anonymise data to feed into further research or analytics about the use of the app or sell it on to third parties for use in advertising.

Final thoughts

The integration of mobile technologies and the internet into medical diagnosis and treatment has significant potential to improve the delivery of health and care across the UK, easing pressure on frontline staff and services and providing more efficient care, particularly for those people who are living with long-term conditions which require monitoring and management.

However, clinicians and researchers have been quick to emphasise that while there are significant benefits for both the doctor and the patient, care must be taken to ensure that the integrity of, and trust within, the doctor-patient relationship are maintained, and that people are not forced into m-health approaches without feeling supported to use the technology properly and manage their conditions effectively. If the training, support and confidence of users in the apps are not there, the roll-out of apps could have the opposite effect, leading to more staff answering questions on using the technology than providing frontline care.




“We’ve updated our privacy policy”: GDPR two years on

by Scott Faulds

Almost two years ago, the General Data Protection Regulation (GDPR) came into force across the European Union (EU) and European Economic Area (EEA), creating what many consider to be the most extensive data protection regulation in the world. The introduction of GDPR facilitated the harmonisation of data privacy laws across Europe and provided citizens with greater control over how their data is used. The regulation sets out the rights of data subjects, the duties of data controllers/processors, the transfer of personal data to third countries, supervisory authorities, cooperation among member states, and remedies, liability or penalties for breach of rights. However, whilst the regulation itself is extensive, questions have been raised regarding the extent to which GDPR has been successful at protecting citizens’ data and privacy.

Breach notifications and fines

Critics of GDPR have argued that whilst the regulation has been effective as a breach notification law, it has so far failed to impose impactful fines on companies which have failed to comply with the GDPR. National data protection authorities (such as the Information Commissioner’s Office (ICO) in the UK) under the GDPR have the ability to impose fines of up to €20m or up to 4% of an organisation’s total global turnover, whichever is higher. Since the introduction of the GDPR, data protection authorities across the EEA have experienced a “massive increase” in reports of data breaches. However, this has yet to translate into substantive financial penalties. For example, Google has been issued a €50m fine, the highest issued so far* by CNIL, the French data protection authority. CNIL found that Google failed to provide sufficient and transparent information that allowed customers to give informed consent to the processing of personal data when creating a Google account during the set-up process of an Android powered device. This is a serious breach of multiple GDPR articles and CNIL argued that the infringements contravene the principles of transparency and informed consent which are at the heart of the GDPR.

*  The confirmation of record fines issued by ICO to British Airways (£183m) and Marriott International (£99m) has been delayed until 31st March 2020.

However, the fine imposed on Google amounts to approximately 0.04% of their total global turnover, which some have argued is simply too small an amount to act as any real deterrent. Therefore, it could be said that while GDPR has been effective in encouraging companies to be transparent when data misuse occurs, national data protection authorities have yet to make use of their ability to impose large financial penalties to act as a deterrent.
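The fine ceiling described above (the higher of €20m or 4% of total global turnover) and the 0.04% figure can be checked with a few lines of arithmetic. The turnover figure below is an assumed round number for illustration, not Google’s actual accounts.

```python
def max_gdpr_fine(global_turnover_eur):
    """Maximum fine available to a data protection authority under GDPR:
    EUR 20m or 4% of total global turnover, whichever is higher."""
    return max(20_000_000, 0.04 * global_turnover_eur)

assumed_turnover = 125_000_000_000  # EUR 125bn, an assumed figure
cnil_fine = 50_000_000              # the EUR 50m fine issued to Google

print(max_gdpr_fine(assumed_turnover))        # 5000000000.0 (EUR 5bn ceiling)
print(f"{cnil_fine / assumed_turnover:.2%}")  # 0.04%
```

On these assumed figures, the €50m fine is two orders of magnitude below the available ceiling, which is the gap critics point to when they question the deterrent effect.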

In recent months, the German and Dutch data protection authorities have both created frameworks which set out how they intend to calculate GDPR fines. Analysis of their fining structures indicates that both models will operate based on the severity of the GDPR violation. However, both structures allow for the data protection authority to impose the maximum fine if the calculated amount is not deemed fitting. The International Association of Privacy Professionals believes this will result in significantly higher and more frequent fines than those issued previously, and has suggested that the European Data Protection Board may consider implementing a harmonised fine model across Europe.

Brussels Effect

The effects of the GDPR can be felt beyond Europe, with companies such as Apple and Microsoft committing to extend GDPR protections to their entire customer base, no matter their location.  Even the COO of Facebook, Sheryl Sandberg, admitted that the introduction of GDPR was necessary due to the scale of data collected by technology companies. The ability of the EU to influence the global regulatory environment has been described by some experts as the “Brussels Effect”. They argue that a combination of the size, importance and regulatory power of the EU market is forcing companies around the world to match EU regulations. Additionally, this effect can be seen to be influencing data protection legislation across the world, with governments in Canada, Japan, New Zealand, Brazil, South Africa and California all introducing updated privacy laws based on the GDPR. As a result, it can be said that the introduction of the GDPR has enabled the EU to play a key role in global discussions regarding privacy and how citizens’ data is used worldwide. 

Brexit

Following the UK’s exit from the EU, the GDPR will remain in force until the end of the transition period (31st December 2020); after this point, it is the intention of the UK Government to introduce the UK GDPR. However, as the UK will no longer be a member state of the EU, it will need to seek what is known as an “adequacy agreement” with the EU, which allows businesses in the EEA and UK to freely exchange data. The UK Government believes that this agreement will be signed during the transition period, as the UK GDPR is not materially different from the EU GDPR. However, it should be noted that the most recent adequacy agreement, between the EU and Japan, took two years to complete.

Final thoughts

The introduction of the GDPR almost two years ago has had a variety of impacts on the current discussion surrounding privacy and how best to protect our personal data. Firstly, the GDPR has forced companies to become more transparent when data misuse occurs and gives national data protection authorities the power to scrutinise companies’ approaches to securing personal data. Secondly, the influence of the GDPR has helped to strengthen privacy laws across the world and has forced companies to provide individuals with more control over how their data is used. However, the effectiveness of the GDPR is limited by the lack of a common approach to fines for GDPR violations. For the regulation to develop fully, it will be important for the European Data Protection Board to provide guidance on how to effectively fine those who breach it.




How AI is transforming local government


By Steven McGinty

Last year, Scottish Local Government Chief Digital Officer Martyn Wallace spoke to the CIO UK podcast and highlighted that in 2019 local government must take advantage of artificial intelligence (AI) to deliver better outcomes for citizens. He explained:

“I think in the public sector we have to see AI as a way to deliver better outcomes and what I mean by that is giving the bots the grunt work – as one coworker called it, ‘shuffling spreadsheets’ – and then we can release staff to do the more complex, human-touch things.”

To date, very few councils have felt brave enough to invest in AI. However, the mood is slowly starting to change and there are several examples in the UK and abroad that show artificial intelligence is not just a buzzword, but a genuine enabler of change.

In December, Local Government Minister Rishi Sunak announced the first round of winners from a £7.5 million digital innovation fund. The 16 winning projects, from 57 councils working in collaborative teams, were awarded grants of up to £100,000 to explore the use of a variety of digital technologies, from Amazon Alexa-style virtual assistants to support people living in care, to the use of data analytics to improve education plans for children with special needs.

These projects are still in their infancy, but there are councils who are further along with artificial intelligence, and have already learned lessons and had measurable successes. For instance, Milton Keynes Council have developed a virtual assistant (or chatbot) to help respond to planning-related queries. Although still at the ‘beta’ stage, trials have shown that the virtual assistant is better able to validate major applications, as these are often based on industry standards, rather than household applications, which tend to be more wide-ranging.

Chief planner, Brett Leahy, suggests that introducing AI will help planners focus more on substantive planning issues, such as community engagement, and let AI “take care of the constant flow of queries and questions”.

In Hackney, the local council has been using AI to identify families that might benefit from additional support. The ‘Early Help Predictive System’ analyses data related to (among others) debt, domestic violence, anti-social behaviour, and school attendance, to build a profile of need for families. By taking this approach, the council believes they can intervene early and prevent the need for high cost support services. Steve Liddicott, head of service for children and young people at Hackney council, reports that the new system is identifying 10 or 20 families a month that might be of future concern. As a result, early intervention measures have already been introduced.

In the US, the University of Chicago’s initiative ‘Data Science for Social Good’ has been using machine learning (a form of AI) to help a variety of social-purpose organisations. This has included helping the City of Rotterdam to understand their rooftop usage – a key step in their goal to address challenges with water storage, green spaces and energy generation. In addition, they’ve also helped the City of Memphis to map properties in need of repair, enabling the city to create more effective economic development initiatives.

Yet, like most new technologies, there has been some resistance to AI. In December 2017, plans by Ofsted to use machine learning tools to identify poorly performing schools were heavily criticised by the National Association of Head Teachers. The association argued that Ofsted should move away from a data-led approach to inspection, and that it was important that the “whole process is transparent and that schools can understand and learn from any assessment.”

Further, hyperbole-filled media reports have led to a general unease that introducing AI could lead to a reduction in the workforce. For example, PwC’s 2018 ‘UK Economic Outlook’ suggests that 18% of public administration jobs could be lost over the next two decades. Although it’s likely many jobs will be automated, no one really knows how the job market will respond to greater use of AI, or whether the creation of new jobs will outnumber those lost.

Should local government invest in AI?

In the next few years, it’s important that local government not only considers the clear benefits of AI, but also addresses the public concerns. Many citizens will be in favour of seeing their taxes go further and improvements in local services – but not if this infringes on their privacy or reduces transparency. Pilot projects, therefore, which provide the opportunity to test the latest technologies, work through common concerns, and raise awareness among the public, are the best starting point for local councils looking to move forward with this potentially transformative technology.


Follow us on Twitter to discover which topics are interesting our research team.

Protecting privacy in the aftermath of the Facebook-Cambridge Analytica scandal

By Steven McGinty

On 4 June, Information Commissioner Elizabeth Denham told MEPs that she was ‘deeply concerned’ about the misuse of social media users’ data.

She was speaking at the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) inquiry into the use of 87 million Facebook profiles by Cambridge Analytica and its consequences for data protection and the wider democratic process. The whole affair has shone a light on how Facebook collected, shared, and used data to target people with political and commercial advertising. And, in a warning to social media giants, she announced:

Online platforms can no longer say that they are merely a platform for content; they must take responsibility for the provenance of the information that is provided to users.”

Although this is tough talk from the UK’s guardian of information rights – and many others, including politicians, have used similar language – the initial response from the Information Commissioner was hardly swift.

The Information Commissioner’s Office (ICO) struggled at the first hurdle, failing to secure a search warrant for Cambridge Analytica’s premises. Four days after Elizabeth Denham announced her intention to raid the premises, she was eventually granted a warrant following a five-hour hearing at the Royal Courts of Justice. This delay – and concerns over the resources available to the ICO – led commentators to question whether the regulator has sufficient powers to tackle tech giants such as Facebook.

Unsurprisingly, it was not long before the Information Commissioner went into “intense discussion” with the government to increase the powers at her disposal. At a conference in London, she explained:

Of course, we need to respect the rights of companies, but we also need streamlined warrant processes with a lower threshold than we currently have in the law.”

Conservative MP Damian Collins, Chair of the Digital, Culture, Media and Sport select committee, expressed similar sentiments, using Twitter to call for new enforcement powers to be included in the Data Protection Bill.

Eventually, after a year of debate, the Data Protection Act 2018 was passed on 23 May. On the ICO blog, Elizabeth Denham welcomed the new law, highlighting that:

The legislation requires increased transparency and accountability from organisations, and stronger rules to protect against theft and loss of data with serious sanctions and fines for those that deliberately or negligently misuse data.”

By introducing this Act, the UK Government is attempting to address a number of issues. However, the Information Commissioner will be particularly pleased that she’s received greater enforcement powers, including the creation of two new criminal offences: ‘alteration etc of personal data to prevent disclosure’ and ‘re-identification of de-identified personal data’.

GDPR

On 25 May, the long awaited General Data Protection Regulation (GDPR) came into force. The Data Protection Act incorporates many of the provisions of GDPR, such as the ability to levy heavy fines on organisations (up to €20,000,000 or 4% of global turnover). The Act also derogates from EU law in areas such as national security and the processing of immigration-related data. The ICO recommend that GDPR and the Data Protection Act 2018 are read side by side.

However, not everyone is happy with GDPR and the new Data Protection Act. Tomaso Falchetta, head of advocacy and policy at Privacy International, has highlighted that although they welcome the additional powers given to the Information Commissioner, there are concerns over the:

wide exemptions that undermine the rights of individuals, particularly with a wide exemption for immigration purposes and on the ever-vague and all-encompassing national security grounds”.

In addition, Dominic Hallas, executive director of The Coalition for a Digital Economy (Coadec), has warned that we must avoid a hasty regulatory response to the Facebook-Cambridge Analytica scandal. He argues that although it’s tempting to hold social media companies liable for the content of users, there are risks in taking this action:

Pushing legal responsibility onto firms might look politically appealing, but the law will apply across the board. Facebook and other tech giants have the resources to accept the financial risks of outsized liability – startups don’t. The end result would entrench the positions of those same companies that politicians are aiming for and instead crush competitors with fewer resources.

Final thoughts

The Facebook-Cambridge Analytica scandal has brought privacy to the forefront of the public’s attention. And although the social media platform has experienced a minor decline in user engagement and the withdrawal of high-profile individuals (such as Elon Musk), its global presence and the convenience it offers to users suggest it’s going to be around for some time to come.

Therefore, the ICO and other regulators must work with politicians, tech companies, and citizens to have an honest debate on the limits of privacy in a world of social media. The GDPR and the Data Protection Act provide a good start in laying down the ground rules. However, in the ever-changing world of technology, it will be important that this discussion continues to find solutions to future challenges. Only then will we avoid walking into another global privacy scandal.


The Knowledge Exchange provides information services to local authorities, public agencies, research consultancies and commercial organisations across the UK. Follow us on Twitter to see what developments in policy and practice are interesting our research team. 

If you found this article interesting, you may also like to read our other digital articles.

How data leaks can bring down governments

Swedish Parliament building

By Steven McGinty

In July 2017, the Swedish Government faced a political crisis after admitting a huge data leak that affected almost all of its citizens.

The leak, which dates back to a 2015 outsourcing contract between the Swedish Transport Agency and IBM Sweden, occurred when IT contractors from Eastern Europe were allowed access to confidential data without proper security clearance. Media reports suggested that the exposed data included information about vehicles used by the armed forces and the police, as well as the identities of some security and military personnel.

The political fallout was huge for Sweden’s minority government. Infrastructure Minister Anna Johansson and Interior Minister Anders Ygeman both lost their positions, whilst the former head of the transport agency, Maria Ågren, was found to have been in breach of the country’s privacy and data protection laws when she waived the security clearance of foreign IT workers. In addition, the far-right Sweden Democrats were calling for an early election and Prime Minister Stefan Löfven faced a vote of no-confidence in parliament (although he easily survived).

However, it’s not just Sweden where data leaks have become political. Last year, the UK saw several high-profile incidents.

Government Digital Service (GDS)

The UK Government’s main data site incorrectly published the email addresses and “hashed passwords” of its users. There was no evidence that data had been misused, but the GDS recommended that users change their password as a precaution. And although users did not suffer any losses, it’s certainly embarrassing for the agency responsible for setting the UK’s digital agenda.

Scottish Government

Official documents revealed that Scottish Government agencies experienced “four significant data security incidents” in 2016-17. Three out of four of these cases breached data protection legislation.

Disclosure Scotland, a body which often deals with highly sensitive information through its work vetting individuals, was one organisation that suffered a data leak. This involved a member of staff sending a mass email in which the email addresses could be viewed by all the recipients (a breach of the Data Protection Act).
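Breaches like this usually come down to putting every address in the visible ‘To’ field. A minimal Python sketch of the safer pattern – keeping recipients on the mail envelope (as with Bcc) rather than in the headers everyone receives – looks like this (the addresses are invented):

```python
from email.message import EmailMessage

def build_bulletin(sender, recipients, body):
    """Mass-mail pattern that avoids disclosing the recipient list:
    recipients are passed to the SMTP envelope separately and never
    appear in the To/Cc headers of the message itself."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = sender  # or an "undisclosed recipients" placeholder
    msg["Subject"] = "Service bulletin"
    msg.set_content(body)
    return msg, recipients  # hand recipients to smtplib's send_message/sendmail

msg, envelope = build_bulletin("noreply@example.gov", ["a@x.com", "b@y.com"], "Hello")
print("a@x.com" in msg.as_string())  # False -- addresses never appear in the message
```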

Murdo Fraser, MSP for the Scottish Conservatives, criticised the data breaches, warning:

These mistakes are entirely the fault of the Scottish government and, worryingly, may signal security weaknesses that hackers may find enticing.”

Hacking parliaments

In the summer of 2017, the UK parliament suffered a ‘brute force’ attack, resulting in 90 email accounts with weak passwords being hacked and part of the parliamentary email system being taken offline. A few months later, the Scottish Parliament experienced a similar sustained attack on parliamentary email accounts. MPs have suggested Russia or North Korea could be to blame for both attacks.
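The arithmetic behind ‘brute force’ attacks shows why weak passwords fall so quickly: the search space grows exponentially with length and alphabet size. A rough back-of-the-envelope calculation (the guessing rate is an assumption; real rates vary enormously with the attack and the hashing in use):

```python
def seconds_to_exhaust(alphabet_size, length, guesses_per_second=1e10):
    """Worst-case time for an offline brute-force attack to try every
    possible password, at an assumed guessing rate."""
    return alphabet_size ** length / guesses_per_second

# An 8-character lowercase password vs a 14-character mixed-case + digits one
weak = seconds_to_exhaust(26, 8)      # roughly 21 seconds
strong = seconds_to_exhaust(62, 14)   # tens of millions of years
print(f"weak: {weak:.0f} s, strong: {strong / 3.156e7:.1e} years")
```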

MPs sharing passwords

In December 2017, the Information Commissioner warned MPs over sharing passwords. This came after a number of Conservative MPs admitted they shared passwords with staff. Conservative MP Nadine Dorries explained:

My staff log onto my computer on my desk with my login every day. Including interns on exchange programmes.”

Their remarks were an attempt to defend the former First Secretary of State, Damian Green, over allegations he watched pornography in his parliamentary office.

Final thoughts

The Swedish data leak shows the political consequences of failing to protect data. The UK’s data leaks have not led to the same level of political scrutiny, but it’s important that UK politicians stay vigilant and ensure data protection is a key priority. Failure to protect citizen data may not only have financial consequences for citizens, but could also erode confidence in public institutions and threaten national security.



Drones in the city: should we ban drone hobbyists?

A young boy flying a drone

By Steven McGinty

Drones are becoming an increasingly observable feature of modern cities, from tech enthusiasts flying drones in local parks to engineers using them to monitor air pollution. And there have also been some high profile commercial trials such as Amazon Prime Air, an ambitious 30-minute delivery service.

However, introducing drones into the public realm has been something of a bumpy ride. Although the Civil Aviation Authority (CAA) produces guidance to ensure drones are flown safely and legally, there have been a number of hazardous incidents.

For example, in April, the first near-miss involving a passenger jet and more than one drone was recorded. The incident at Gatwick Airport saw two drones flying within 500m of an Airbus A320, with one pilot reporting a “significant risk of collision” had they been on a different approach path. In addition – and just 30 minutes later – one of these drones flew within 50m of another passenger jet, a Boeing 777.

Videos plainly taken from drones have also been uploaded to websites such as YouTube – a clear breach of the CAA’s rules prohibiting the flying of drones over or within 150m of built-up areas. These include footage of events such as the Cambridge Folk Festival, a match at Liverpool FC’s Anfield Stadium, and Nottingham’s Goose Fair. Jordan Brooks, who works for Upper Cut Productions – a company which specialises in using drones for aerial photography and filming – explains that:

They look like toys. For anyone buying one you feel like you’re flying a toy ‘copter when actually you’ve got a hazardous helicopter that can come down and injure somebody.

Privacy concerns have also started to emerge. Sally Annereau, data protection analyst at law firm Taylor Wessing, highlights a recent European case which held that a suspect’s rights had been infringed by a homeowner’s CCTV recording him whilst he was in a public place. Although not specifically about drones, Sally Annereau suggests this decision will have far reaching consequences, with potential implications for drone users recording in public and sharing their footage on social media sites. The Information Commissioner’s Office (ICO) has already issued guidance for drones.

The CAA report that there were more than 3,456 incidents involving drones in 2016. This is a significant increase on the 1,237 incidents in 2015.

The response

Cities have often taken contradictory approaches to drones. Bristol City Council has banned their use in the majority of its parks and open spaces. Similarly, several London boroughs have introduced ‘no drone zones’, although the London Borough of Richmond upon Thames has a relatively open policy, only banning drones over Richmond Park. Further, Lambeth Council requires hobbyists to complete an application form “to ensure suitability”, a standard similar to that applied to commercial drone pilots.

There have also been several accusations of double standards, as large commercial operators such as Amazon receive exemptions to CAA rules ahead of photographers recording events, hospitals delivering blood, and researchers collecting data.

Although cities have a responsibility to protect the public, they also have to ensure citizens are able to exercise their rights. The air is a common space, and as such cities must ensure that hobbyists – as well as multinational firms – can enjoy the airspace. Thus, it might be interesting to see cities take a more positive approach and designate ‘drone zones’, where hobbyists can get together and fly their drones away from potential hazards.



Smart Chicago: how smart city initiatives are helping meet urban challenges

Outside a Chicago theatre, with a huge 'Chicago' sign outside

By Steven McGinty

Home to former President Barack Obama, sporting giants the Chicago Bulls, and the culinary delicacy deep dish pizza, Chicago is one of the most famous cities in the world. Less well known is Chicago’s ambition to become the most data-driven city in the world.

A late convert to the smart city agenda, Chicago was lagging behind local rivals New York and Boston, and international leaders Barcelona, Amsterdam, and Singapore.

But in 2011, Chicago’s new Mayor Rahm Emanuel outlined the important role technology needed to play, if the city was to address its main challenges.

Laying the groundwork – open data and tech plan

In 2012, Mayor Rahm Emanuel issued an executive order establishing the city’s open data policy. The order was designed to increase transparency and accountability in the city, and to empower citizens to participate in government, solve social problems, and promote economic growth. It required every city agency to contribute data to the city’s open data portal and established reporting requirements to ensure agencies were held accountable.

Chicago’s open data portal has nearly 600 datasets, which is more than double the number in 2011. The city works closely with civic hacker group Open Chicago, an organisation which runs hackathons (collaborations between developers and businesses using open data to find solutions to city problems).

In 2013, the City of Chicago Technology Plan was released. This brought together 28 of the city’s technology initiatives into one policy roadmap, setting them out within five broad strategic areas:

  • Establishing next-generation infrastructure
  • Creating smart communities
  • Ensuring efficient, effective, and open government
  • Working with innovators to develop solutions to city challenges
  • Encouraging Chicago’s technology sector

Array of Things

The Array of Things is an ambitious programme to install 500 sensors throughout the city of Chicago. Described by the project team as a ‘fitness tracker for the city’, the sensors will collect real-time data on air quality, noise levels, temperature, light, pedestrian and vehicle traffic, and the water levels on streets and gutters. The data gathered will be made publicly available via the city’s website, and will provide a vital resource for the researchers, developers, policymakers, and citizens trying to address city challenges.
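To give a flavour of what this publicly available data enables, here is a small sketch that averages air quality readings per sensor node. The field names and values are invented for illustration – the Array of Things’ actual schema may differ:

```python
# Hypothetical readings -- not the Array of Things' real feed or field names.
readings = [
    {"node": "AoT-001", "metric": "pm2_5", "value": 12.1},
    {"node": "AoT-001", "metric": "pm2_5", "value": 14.3},
    {"node": "AoT-002", "metric": "pm2_5", "value": 31.8},
]

def mean_by_node(rows, metric):
    """Average a chosen metric per sensor node."""
    totals = {}
    for r in rows:
        if r["metric"] == metric:
            s, n = totals.get(r["node"], (0.0, 0))
            totals[r["node"]] = (s + r["value"], n + 1)
    return {node: round(s / n, 2) for node, (s, n) in totals.items()}

print(mean_by_node(readings, "pm2_5"))  # {'AoT-001': 13.2, 'AoT-002': 31.8}
```

Simple aggregations like this are the building blocks of the applications the project team envisages, such as identifying the healthiest walking routes through the city.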

This new initiative is a major project for the city, but as Brenna Berman, Chicago’s chief information officer, explains:

If we’re successful, this data and the applications and tools that will grow out of it will be embedded in the lives of residents, and the way the city builds new services and policies

Potential applications for the city’s data could include providing citizens with information on the healthiest and unhealthiest walking times and routes through the city, as well as the areas likely to be impacted by urban flooding.

The project is led by the Urban Center for Computation and Data of the Computation Institute, a joint initiative of Argonne National Laboratory and the University of Chicago. However, a range of partners are involved in the project, including several universities; the City of Chicago, which provides an important governance role; and technology firms such as Product Development Technologies, the company which built the ‘enclosures’ that protect the sensors from environmental conditions.

A series of community meetings was held to introduce the Array of Things concept to the community and to consult on the city’s governance and privacy policy. This engagement ranged from holding public meetings in community libraries to providing online forms, where citizens could provide feedback anonymously.

In addition, the Urban Center for Computation and Data and the School of the Art Institute of Chicago ran a workshop entitled the “Lane of Things”, which introduced high school students to sensor technology. The workshop is part of the Array of Things education programme, which aims to use sensor technology to teach students about subjects such as programming and data science. For eight weeks, the students were given the opportunity to design and build their own sensing devices and implement them in the school environment, collecting information such as dust levels from nearby construction and the dynamics of hallway traffic.

The Array of Things project is funded by a $3.1 million National Science Foundation grant and is expected to be complete by 2018.

Mapping Subterranean Chicago

The City of Chicago is working with local technology firm City Digital to produce a 3D map of the underground infrastructure, such as water pipes, fibre optic lines, and gas pipes. The project will involve engineering and utility workers taking digital pictures as they open up the streets and sidewalks of Chicago. These images will then be scanned into City Digital’s underground infrastructure mapping (UIM) platform, and key data points, such as the width and height of pipes, will be extracted and layered onto a digital map of Chicago.

According to Brenna Berman:

By improving the accuracy of underground infrastructure information, the platform will prevent inefficient and delayed construction projects, accidents, and interruptions of services to citizens.

Although still at the pilot stage, the technology has been used on one construction site and an updated version is expected to be used on a larger site in Chicago’s River North neighbourhood. Once proven, the city plans to charge local construction and utility firms to access the data, generating income whilst reducing the costs of construction and improving worker safety.

ShotSpotter

In January, Mayor Rahm Emanuel and Chicago Police Department commanders announced the expansion of ShotSpotter – a system which uses sensors to capture audio of gunfire and alert police officers to its exact location. The expansion will take place in the Englewood and Harrison neighbourhoods, two of the city’s highest crime areas, and should allow police officers to respond to incidents more rapidly.
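ShotSpotter’s exact algorithms are proprietary, but the underlying idea – locating a sound from the differences in its arrival times at several sensors – can be sketched as a simple grid search. This is an idealised, noise-free illustration with invented sensor positions (in metres), not the real system:

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second at roughly 20°C

def locate(sensors, arrival_times, grid_step=5.0, extent=500.0):
    """Grid-search estimate of a sound's origin: pick the candidate point
    whose implied emission times (arrival time minus travel time) agree
    most closely across all sensors."""
    best, best_spread = None, float("inf")
    steps = int(extent / grid_step)
    for ix in range(steps + 1):
        for iy in range(steps + 1):
            x, y = ix * grid_step, iy * grid_step
            taus = [t - math.hypot(x - sx, y - sy) / SPEED_OF_SOUND
                    for (sx, sy), t in zip(sensors, arrival_times)]
            mean = sum(taus) / len(taus)
            spread = sum((tau - mean) ** 2 for tau in taus)
            if spread < best_spread:
                best, best_spread = (x, y), spread
    return best

# Simulate a gunshot at (120, 240) heard by four sensors on a 400m square
sensors = [(0, 0), (400, 0), (0, 400), (400, 400)]
true_pos = (120, 240)
times = [math.hypot(true_pos[0] - sx, true_pos[1] - sy) / SPEED_OF_SOUND
         for sx, sy in sensors]
print(locate(sensors, times))  # recovers approximately (120.0, 240.0)
```

Real deployments must cope with echoes, background noise and imperfect clocks, which is where the engineering difficulty lies.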

Chicago Police Superintendent Eddie Johnson highlights that although crime and violence presents a complex problem for the city, the technology has resulted in Englewood going “eight straight days without a shooting incident”, the longest period in three years.

ShotSpotter will also be integrated into the city’s predictive analytics tools, which are used to assess how likely individuals are to become victims of gun crime, based on factors such as the number of times they have been arrested with individuals who have become gun crime victims.

Final thoughts

Since 2011, Chicago has been attempting to transform itself into a leading smart city. Although it’s difficult to compare Chicago with early adopters such as Barcelona, the city has clearly introduced a number of innovative projects and is making progress on its smart city journey.

In particular, the ambitious Array of Things project will have many cities watching to see if understanding the dynamics of city life can help to solve urban challenges.


Follow us on Twitter to see what developments in public and social policy are interesting our research team.

If you found this article interesting, you may also like to read our other smart cities articles.

General Data Protection Regulation (GDPR): 10 things business needs to know

 

European Union flag with a padlock in the centre.

By Steven McGinty

On 25 May 2018, the data protection landscape will experience its biggest change in over 20 years.  This is because the European Union’s (EU) General Data Protection Regulation (GDPR) will come into effect for all member states. The regulation, which has been described as ‘ambitious’ and ‘wide-ranging’, introduces a number of new concepts, including the high profile ‘right to be forgotten’ – a principle established in a case involving technology giant Google.

Below we’ve highlighted ten of the most important points for business.

Directly effective

The GDPR is ‘directly effective’, which means that the regulation becomes law without the need for additional domestic legislation (replacing the Data Protection Act 1998). However, member states have also been given scope to introduce their own legislation on matters such as the processing of personal data. This may result in some EU states having more stringent rules than others.

Sharing data and monitoring

It also seeks to increase the reach of EU data protection law. Not only will EU-based data controllers and processors fall under the scope of the GDPR, but its authority will also extend to any business which either processes personal data or monitors the behaviour of individuals within the EU.

This will impact businesses who transfer data outside the European Economic Area (EEA). It will be their responsibility to ensure that the country the data is being transferred to has adequate levels of data protection. The most prominent example of this issue was the US Safe Harbour scheme, which was intended to protect European individuals whose personal data is transferred between the EEA and the USA. In 2015, the European Court of Justice ruled that this scheme had ceased to provide a valid legal basis for EEA-US transfers of all types of personal data. It has now been replaced by the Privacy Shield.

Transparency and consent

Greater obligations have been placed on business with regard both to seeking consent for the use of personal data and to providing detailed information to individuals on how their personal data is being used. The GDPR requires that consent notices are ‘unambiguous’ – not assumed from a person’s failure to respond – and that consent is sought for different processing activities. Law firm Allen & Overy recommends that businesses review their notices to ensure they are fit for purpose.

Personal data/ sensitive data

Article 4(1) of the GDPR includes a broader definition of ‘personal data’ than previous legislation. It states that any information relating to an individual which can be directly or indirectly used to identify them is personal data. Specifically, it refers to ‘online identifiers’, which suggests that IP addresses and cookies may be considered personal data if they can be easily linked back to the person.

Enhanced rights

New rights and the enhancement of existing rights will require some businesses to improve the way their data is stored and managed. These rights include:

  • Data portability – Businesses must ensure that individuals can have easy access to their personal data in case they want to transfer it to other systems.
  • Strengthening subject access rights – Individuals can now request access to their data at no cost, and requests must be answered within 30 days (under the current legislation, organisations can charge a £10 fee and have 40 days to respond).
  • Right to be forgotten – Individuals can request that an organisation delete all the information they hold on them (although this would not apply if there was a valid reason to hold that data).
  • Right to object to processing – Individuals have the right to object to the way an organisation is processing their data.
  • Right to restrict processing – Individuals have the right to request that the processing of personal data is temporarily stopped. This may be invoked whilst a right to object request is being investigated.

Personal data breach

Businesses have an obligation to report breaches to their national regulator, such as the Information Commissioner’s Office (ICO) in the UK. The GDPR requires that notice must be provided “without undue delay and, where feasible, not later than 72 hours after having become aware of it.” This may be challenging for some businesses, particularly if the incident is discovered at the end of the working week.
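The 72-hour clock runs in calendar time, not working days, which is what makes a Friday-evening discovery awkward. The deadline arithmetic is trivial to check:

```python
from datetime import datetime, timedelta

def notification_deadline(aware_at: datetime) -> datetime:
    """GDPR breach notification: the regulator must be notified within
    72 hours of the organisation becoming aware of the breach."""
    return aware_at + timedelta(hours=72)

# A breach discovered at 5pm on Friday 25 May 2018 must be reported
# by 5pm on the Monday -- weekend or not.
print(notification_deadline(datetime(2018, 5, 25, 17, 0)))  # 2018-05-28 17:00:00
```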

Failure to comply with GDPR

The regulation introduces two levels of fines. Less serious offences will attract a fine of up to €10,000,000 or 2% of global turnover – whichever is higher. However, for more serious breaches, such as a breach of an individual’s rights or a breach during international transfers, a business may be held liable for up to €20,000,000 or 4% of global turnover.
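In other words, the cap at each tier is simply the higher of a fixed amount and a percentage of global turnover – easy to express directly:

```python
def max_gdpr_fine(global_turnover_eur, serious=False):
    """Upper bound on a GDPR fine: the higher of the fixed cap and the
    turnover-based cap for the relevant tier."""
    fixed_cap = 20_000_000 if serious else 10_000_000
    rate = 0.04 if serious else 0.02
    return max(fixed_cap, rate * global_turnover_eur)

# A firm with €2bn global turnover faces up to €80m for a serious breach
print(max_gdpr_fine(2_000_000_000, serious=True))  # 80000000.0
```

For smaller firms the fixed cap dominates; for multinationals the turnover percentage does, which is precisely why the tech giants took notice.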

In addition, individuals are also given the right of redress, and those who have had their rights violated may seek to receive compensation. This has led digital marketers to suggest that GDPR could be the next PPI – a practice where insurance was mis-sold to customers, which resulted in a large number of successful claims against financial institutions.

Privacy by design

Technology businesses should also consider data protection at the initial design stage of product development. This could involve adopting technical measures such as pseudonymisation – the technique of processing personal data in such a way that it can no longer be attributed to a particular person without the use of additional information. Additional measures, such as policies and programmes, would also demonstrate to a national regulator the business’s commitment to complying with the GDPR.
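One common way to implement pseudonymisation is a keyed hash, where the key (the ‘additional information’) is held separately from the working dataset. A minimal sketch – the key and field names here are purely illustrative:

```python
import hashlib
import hmac

def pseudonymise(identifier: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    Without the separately stored key, the pseudonym cannot be
    linked back to the original value."""
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()

# In practice the key would live in a dedicated key store, not in code
key = b"illustrative-secret-key"
record = {"person": pseudonymise("Jane Citizen", key), "postcode_area": "EH3"}

# Same key + same identifier -> same pseudonym, so records still join up
assert record["person"] == pseudonymise("Jane Citizen", key)
```

The design choice matters: a plain unkeyed hash of a name or ID can often be reversed by hashing guesses, which is why the keyed variant is preferred.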

European Data Protection Board (EDPB)

A new body has been created to issue opinions and to arbitrate in disputes that arise between national regulators. The board will be made up of the heads of national regulatory bodies (or their representatives) and the European Data Protection Supervisor (EDPS), which oversees the data processing activities of EU institutions. The opinions expressed by this board may have important implications for data protection legislation.

Impact of Brexit

Evidence suggests some businesses may be delaying taking action until they see the results of the Brexit negotiations. This possibly explains the research by cloud security firm, Netskope, which found that 63% of UK workers have never heard of the GDPR. Similarly, research by Veritas Technologies, a leading information management firm, has found that 54% of organisations have not ensured they will comply with the new GDPR.

However, it would be very surprising if the UK did not ‘mirror’ the protections offered by the regulation, particularly considering the UK’s significant input to the new legislation. Digital minister Matt Hancock has also confirmed that the UK government intends to fully implement the GDPR.

Final thoughts

If businesses already have policy and procedures in place to meet the requirements of the Data Protection Act, then they should have a solid foundation to comply with the GDPR. In many ways, the new regulation simply provides a clear framework for delivering good practice in data protection.

However, all businesses will need to take action to ensure compliance with the GDPR. Otherwise, the financial penalties (as well as reputational damage) of a breach could have serious consequences for their business. And this is not just an IT issue. The whole organisation, starting from board level, must show a willingness to understand the legislation and implement procedures that protect the fundamental rights of individuals.


Follow us on Twitter to see what developments in public and social policy are interesting our research team. If you found this article interesting, you may also like to read our other data-related articles