“We’ve updated our privacy policy”: GDPR two years on

by Scott Faulds

Almost two years ago, the General Data Protection Regulation (GDPR) came into force across the European Union (EU) and European Economic Area (EEA), creating what many consider to be the most extensive data protection regulation in the world. The introduction of GDPR facilitated the harmonisation of data privacy laws across Europe and provided citizens with greater control over how their data is used. The regulation sets out the rights of data subjects, the duties of data controllers/processors, the transfer of personal data to third countries, supervisory authorities, cooperation among member states, and remedies, liability or penalties for breach of rights. However, whilst the regulation itself is extensive, questions have been raised regarding the extent to which GDPR has been successful at protecting citizens’ data and privacy.

Breach Notifications and Fines

Critics of the GDPR have argued that whilst the regulation has been effective as a breach notification law, it has so far failed to deliver impactful fines on companies that do not comply. Under the GDPR, national data protection authorities (such as the Information Commissioner’s Office (ICO) in the UK) have the power to impose fines of up to €20m or up to 4% of an organisation’s total global turnover, whichever is higher. Since the introduction of the GDPR, data protection authorities across the EEA have experienced a “massive increase” in reports of data breaches. However, this has yet to translate into substantive financial penalties. The highest fine issued so far* is the €50m penalty imposed on Google by CNIL, the French data protection authority. CNIL found that Google failed to provide sufficient and transparent information to allow customers to give informed consent to the processing of their personal data when creating a Google account during the set-up of an Android-powered device. This is a serious breach of multiple GDPR articles, and CNIL argued that the infringements contravene the principles of transparency and informed consent which are at the heart of the GDPR.

*  The confirmation of record fines proposed by the ICO for British Airways (£183m) and Marriott International (£99m) has been delayed until 31st March 2020.

However, the fine imposed on Google amounts to approximately 0.04% of their total global turnover, which some have argued is simply too small an amount to act as any real deterrent. Therefore, it could be said that while GDPR has been effective in encouraging companies to be transparent when data misuse occurs, national data protection authorities have yet to make use of their ability to impose large financial penalties to act as a deterrent.
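
To put that 0.04% in context, the cap set by the regulation is the higher of €20m or 4% of global annual turnover. A back-of-the-envelope calculation in Python (using an assumed turnover figure rather than Google’s published accounts) illustrates the gap between the theoretical ceiling and the fine actually issued:

```python
# Illustrative arithmetic only; the turnover figure below is an assumption.
def gdpr_max_fine(global_turnover_eur: float) -> float:
    """The GDPR cap: the higher of €20m or 4% of global annual turnover."""
    return max(20_000_000, 0.04 * global_turnover_eur)

assumed_turnover = 120_000_000_000   # assume roughly €120bn global turnover
actual_fine = 50_000_000             # the €50m fine issued by CNIL

print(f"Theoretical maximum fine: €{gdpr_max_fine(assumed_turnover):,.0f}")
print(f"Fine as a share of turnover: {actual_fine / assumed_turnover:.2%}")
```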

In recent months, the German and Dutch data protection authorities have both created frameworks which set out how they intend to calculate GDPR fines. Analysis of their fining structures indicates that both models operate on the basis of the severity of the GDPR violation, while allowing the data protection authority to impose the maximum fine if the calculated amount is not deemed fitting. The International Association of Privacy Professionals believes this will result in significantly higher and more frequent fines than those issued previously, and has suggested that the European Data Protection Board may consider implementing a harmonised fine model across Europe.

Brussels Effect

The effects of the GDPR can be felt beyond Europe, with companies such as Apple and Microsoft committing to extend GDPR protections to their entire customer base, no matter their location. Even the COO of Facebook, Sheryl Sandberg, admitted that the introduction of the GDPR was necessary, given the scale of data collected by technology companies. The ability of the EU to influence the global regulatory environment has been described by some experts as the “Brussels Effect”: they argue that a combination of the size, importance and regulatory power of the EU market is forcing companies around the world to match EU regulations. This effect is also influencing data protection legislation across the world, with governments in Canada, Japan, New Zealand, Brazil, South Africa and California all introducing updated privacy laws based on the GDPR. As a result, it can be said that the introduction of the GDPR has enabled the EU to play a key role in global discussions regarding privacy and how citizens’ data is used worldwide.

Brexit

Following the UK’s exit from the EU, the GDPR will remain in force until the end of the transition period (31st December 2020); after this point, the UK Government intends to introduce the UK GDPR. However, as the UK will no longer be an EU member state, it will need to seek what is known as an “adequacy agreement” with the EU, which would allow businesses in the EEA and UK to continue to exchange data freely. The UK Government believes that this agreement can be concluded during the transition period, as the UK GDPR is not materially different from the EU GDPR. However, it should be noted that the most recent adequacy agreement, between the EU and Japan, took two years to complete.

Final Thoughts

The introduction of the GDPR almost two years ago has had a variety of impacts on the current discussion surrounding privacy and how best to protect our personal data. Firstly, the GDPR has forced companies to become more transparent when data misuse occurs and has given national data protection authorities the power to scrutinise companies’ approaches to securing personal data. Secondly, the influence of the GDPR has helped to strengthen privacy laws across the world and has forced companies to provide individuals with more control over how their data is used. However, the effectiveness of the GDPR is limited by the lack of a common approach to fines for GDPR violations. For the regulation to develop fully, it will be important for the European Data Protection Board to provide guidance on how to impose effective fines on those who breach the GDPR.


If you enjoyed this post, you may also like some of our other posts related to GDPR.

Follow us on Twitter to see what topics are interesting our research team.

How AI is transforming local government


By Steven McGinty

Last year, Scottish Local Government Chief Digital Officer Martyn Wallace spoke to the CIO UK podcast and highlighted that in 2019 local government must take advantage of artificial intelligence (AI) to deliver better outcomes for citizens. He explained:

“I think in the public sector we have to see AI as a way to deliver better outcomes and what I mean by that is giving the bots the grunt work – as one coworker called it, ‘shuffling spreadsheets’ – and then we can release staff to do the more complex, human-touch things.”

To date, very few councils have felt brave enough to invest in AI. However, the mood is slowly starting to change and there are several examples in the UK and abroad that show artificial intelligence is not just a buzzword, but a genuine enabler of change.

In December, Local Government Minister Rishi Sunak announced the first round of winners from a £7.5 million digital innovation fund. The 16 winning projects, from 57 councils working in collaborative teams, were awarded grants of up to £100,000 to explore the use of a variety of digital technologies, from Amazon Alexa-style virtual assistants to support people living in care, to the use of data analytics to improve education plans for children with special needs.

These projects are still in their infancy, but some councils are further along with artificial intelligence, and have already learned lessons and had measurable successes. For instance, Milton Keynes Council has developed a virtual assistant (or chatbot) to help respond to planning-related queries. Although still at the ‘beta’ stage, trials have shown that the virtual assistant is better at validating major applications than household applications, as the former are often based on industry standards while the latter tend to be more wide-ranging.

Chief planner, Brett Leahy, suggests that introducing AI will help planners focus more on substantive planning issues, such as community engagement, and let AI “take care of the constant flow of queries and questions”.

In Hackney, the local council has been using AI to identify families that might benefit from additional support. The ‘Early Help Predictive System’ analyses data related to (among others) debt, domestic violence, anti-social behaviour, and school attendance, to build a profile of need for families. By taking this approach, the council believes they can intervene early and prevent the need for high cost support services. Steve Liddicott, head of service for children and young people at Hackney council, reports that the new system is identifying 10 or 20 families a month that might be of future concern. As a result, early intervention measures have already been introduced.
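
Hackney has not published the model itself, so the following is a purely illustrative sketch of the kind of indicator-based scoring described above; the field names and weights are hypothetical, not the council’s.

```python
# Hypothetical illustration of indicator-based scoring; not Hackney's actual system.
WEIGHTS = {
    "debt_flag": 2.0,                      # household has problem debt (0/1)
    "domestic_violence_referrals": 3.0,    # count of referrals
    "antisocial_behaviour_reports": 1.5,   # count of reports
    "school_absence_rate": 2.5,            # proportion of sessions missed (0-1)
}

def need_score(household: dict) -> float:
    """Combine the indicators into a single, illustrative score of likely need."""
    return sum(weight * float(household.get(field, 0)) for field, weight in WEIGHTS.items())

families = [
    {"id": "A", "debt_flag": 1, "school_absence_rate": 0.3},
    {"id": "B", "antisocial_behaviour_reports": 2, "domestic_violence_referrals": 1},
]

# Surface the highest-scoring families for a possible early-help conversation
for family in sorted(families, key=need_score, reverse=True):
    print(family["id"], round(need_score(family), 2))
```

A real system would, of course, need far more care over data quality, bias and professional review by social workers; the sketch is only intended to show the shape of the approach.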

In the US, the University of Chicago’s initiative ‘Data Science for Social Good’ has been using machine learning (a form of AI) to help a variety of social-purpose organisations. This has included helping the City of Rotterdam to understand their rooftop usage – a key step in their goal to address challenges with water storage, green spaces and energy generation. In addition, they’ve also helped the City of Memphis to map properties in need of repair, enabling the city to create more effective economic development initiatives.

Yet, like most new technologies, there has been some resistance to AI. In December 2017, plans by Ofsted to use machine learning tools to identify poorly performing schools were heavily criticised by the National Association of Head Teachers. In their view, Ofsted should move away from a data-led approach to inspection and argued that it was important that the “whole process is transparent and that schools can understand and learn from any assessment.”

Further, hyperbole-filled media reports have led to a general unease that introducing AI could lead to a reduction in the workforce. For example, PwC’s 2018 ‘UK Economic Outlook’ suggests that 18% of public administration jobs could be lost over the next two decades. Although it’s likely that many jobs will be automated, no one really knows how the job market will respond to greater use of AI, or whether the creation of new jobs will outnumber those lost.

Should local government invest in AI?

In the next few years, it’s important that local government not only considers the clear benefits of AI, but also addresses the public concerns. Many citizens will be in favour of seeing their taxes go further and improvements in local services – but not if this infringes on their privacy or reduces transparency. Pilot projects, therefore, which provide the opportunity to test the latest technologies, work through common concerns, and raise awareness among the public, are the best starting point for local councils looking to move forward with this potentially transformative technology.


Follow us on Twitter to discover which topics are interesting our research team.

Protecting privacy in the aftermath of the Facebook-Cambridge Analytica scandal

By Steven McGinty

On 4 June, Information Commissioner Elizabeth Denham told MEPs that she was ‘deeply concerned’ about the misuse of social media users’ data.

She was speaking at the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) inquiry into the use of 87 million Facebook profiles by Cambridge Analytica and its consequences for data protection and the wider democratic process. The whole affair has shone a light on how Facebook collected, shared, and used data to target people with political and commercial advertising. And, in a warning to social media giants, she announced:

“Online platforms can no longer say that they are merely a platform for content; they must take responsibility for the provenance of the information that is provided to users.”

Although this is tough talk from the UK’s guardian of information rights – and many others, including politicians, have used similar language – the initial response from the Information Commissioner was hardly swift.

The Information Commissioner’s Office (ICO) struggled at the first hurdle, failing to secure a search warrant for Cambridge Analytica’s premises. Four days after Elizabeth Denham announced her intention to raid the premises, she was eventually granted a warrant following a five-hour hearing at the Royal Courts of Justice. This delay – and concerns over the resources available to the ICO – led commentators to question whether the regulator has sufficient powers to tackle tech giants such as Facebook.

Unsurprisingly, it was not long before the Information Commissioner went into “intense discussion” with the government to increase the powers at her disposal. At a conference in London, she explained:

“Of course, we need to respect the rights of companies, but we also need streamlined warrant processes with a lower threshold than we currently have in the law.”

Conservative MP Damian Collins, Chair of the Digital, Culture, Media and Sport select committee, expressed similar sentiments on Twitter, calling for new enforcement powers to be included in the Data Protection Bill.

Eventually, after a year of debate, the Data Protection Act 2018 was passed on 23 May. On the ICO blog, Elizabeth Denham welcomed the new law, highlighting that:

“The legislation requires increased transparency and accountability from organisations, and stronger rules to protect against theft and loss of data with serious sanctions and fines for those that deliberately or negligently misuse data.”

By introducing this Act, the UK Government is attempting to address a number of issues. However, the Information Commissioner will be particularly pleased that she has received greater enforcement powers, including the creation of two new criminal offences: the ‘alteration etc of personal data to prevent disclosure‘ and the ‘re-identification of de-identified personal data’.

GDPR

On 25 May, the long awaited General Data Protection Regulation (GDPR) came into force. The Data Protection Act incorporates many of the provisions of GDPR, such as the ability to levy heavy fines on organisations (up to €20,000,000 or 4% of global turnover). The Act also derogates from EU law in areas such as national security and the processing of immigration-related data. The ICO recommend that GDPR and the Data Protection Act 2018 are read side by side.

However, not everyone is happy with GDPR and the new Data Protection Act. Tomaso Falchetta, head of advocacy and policy at Privacy International, has highlighted that although they welcome the additional powers given to the Information Commissioner, there are concerns over the:

“wide exemptions that undermine the rights of individuals, particularly with a wide exemption for immigration purposes and on the ever-vague and all-encompassing national security grounds”.

In addition, Dominic Hallas, executive director of The Coalition for a Digital Economy (Coadec), has warned that we must avoid a hasty regulatory response to the Facebook-Cambridge Analytica scandal. He argues that although it’s tempting to hold social media companies liable for the content of users, there are risks in taking this action:

“Pushing legal responsibility onto firms might look politically appealing, but the law will apply across the board. Facebook and other tech giants have the resources to accept the financial risks of outsized liability – startups don’t. The end result would entrench the positions of those same companies that politicians are aiming for and instead crush competitors with fewer resources.”

Final thoughts

The Facebook-Cambridge Analytica scandal has brought privacy to the forefront of the public’s attention. And although the social media platform has experienced a slight decline in user engagement and the withdrawal of high-profile individuals (such as inventor Elon Musk), its global presence and the convenience it offers to users suggest it’s going to be around for some time to come.

Therefore, the ICO and other regulators must work with politicians, tech companies, and citizens to have an honest debate on the limits of privacy in a world of social media. The GDPR and the Data Protection Act provide a good start in laying down the ground rules. However, in the ever-changing world of technology, it will be important that this discussion continues to find solutions to future challenges. Only then will we avoid walking into another global privacy scandal.


The Knowledge Exchange provides information services to local authorities, public agencies, research consultancies and commercial organisations across the UK. Follow us on Twitter to see what developments in policy and practice are interesting our research team. 

If you found this article interesting, you may also like to read our other digital articles.

How data leaks can bring down governments


By Steven McGinty

In July 2017, the Swedish Government faced a political crisis after admitting a huge data leak that affected almost all of its citizens.

The leak, which dates back to a 2015 outsourcing contract between the Swedish Transport Agency and IBM Sweden, occurred when IT contractors from Eastern Europe were allowed access to confidential data without proper security clearance. Media reports suggested that the exposed data included information about vehicles used by the armed forces and the police, as well as the identities of some security and military personnel.

The political fallout was huge for Sweden’s minority government. Infrastructure Minister Anna Johansson and Interior Minister Anders Ygeman both lost their positions, whilst the former head of the transport agency, Maria Ågren, was found to have been in breach of the country’s privacy and data protection laws when she waived the security clearance of foreign IT workers. In addition, the far-right Sweden Democrats were calling for an early election and Prime Minister Stefan Löfven faced a vote of no-confidence in parliament (although he easily survived).

However, it’s not just Sweden where data leaks have become political. Last year, the UK saw several high-profile incidents.

Government Digital Service (GDS)

The UK Government’s main data site incorrectly published the email addresses and “hashed passwords” of its users. There was no evidence that data had been misused, but the GDS recommended that users change their password as a precaution. And although users did not suffer any losses, it’s certainly embarrassing for the agency responsible for setting the UK’s digital agenda.
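
‘Hashed passwords’ means that what was exposed were one-way transformations of the passwords rather than the passwords themselves, which is why the advice to change them was precautionary. A minimal sketch of the general idea (not a description of GDS’s actual setup) using a salted key-derivation function:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived key); only these values are stored, never the password."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, key)

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("wrong guess", salt, key))                   # False
```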

Scottish Government

Official documents revealed that Scottish Government agencies experienced “four significant data security incidents” in 2016-17. Three out of four of these cases breached data protection legislation.

Disclosure Scotland, a body which often deals with highly sensitive information through its work vetting individuals, was one organisation that suffered a data leak. A member of staff sent a mass email in which the email addresses of all recipients were visible to one another (a breach of the Data Protection Act).

Scottish Conservative MSP Murdo Fraser criticised the data breaches, warning:

“These mistakes are entirely the fault of the Scottish government and, worryingly, may signal security weaknesses that hackers may find enticing.”

Hacking parliaments

In the summer of 2017, the UK Parliament suffered a ‘brute force’ attack, resulting in 90 email accounts with weak passwords being hacked and part of the parliamentary email system being taken offline. A few months later, the Scottish Parliament experienced a similar sustained attack on parliamentary email accounts. MPs have suggested Russia or North Korea could be to blame for both attacks.

MPs sharing passwords

In December 2017, the Information Commissioner warned MPs over sharing passwords. This came after a number of Conservative MPs admitted they shared passwords with staff. Conservative MP Nadine Dorries explained:

“My staff log onto my computer on my desk with my login every day. Including interns on exchange programmes.”

Their remarks were an attempt to defend the former First Secretary of State, Damian Green, over allegations he watched pornography in his parliamentary office.

Final thoughts

The Swedish data leak shows the political consequences of failing to protect data. The UK’s data leaks have not led to the same level of political scrutiny, but it’s important that UK politicians stay vigilant and ensure data protection is a key priority. Failure to protect citizen data may not only have financial consequences for citizens, but could also erode confidence in public institutions and threaten national security.


The Knowledge Exchange provides information services to local authorities, public agencies, research consultancies and commercial organisations across the UK. Follow us on Twitter to see what developments in policy and practice are interesting our research team. 

Drones in the city: should we ban drone hobbyists?


By Steven McGinty

Drones are becoming an increasingly observable feature of modern cities, from tech enthusiasts flying drones in local parks to engineers using them to monitor air pollution. And there have also been some high profile commercial trials such as Amazon Prime Air, an ambitious 30-minute delivery service.

However, introducing drones into the public realm has been something of a bumpy ride. Although the Civil Aviation Authority (CAA) produces guidance to ensure drones are flown safely and legally, there have been a number of hazardous incidents.

For example, in April, the first near-miss involving a passenger jet and more than one drone was recorded. The incident at Gatwick Airport saw two drones flying within 500m of an Airbus A320, with one pilot reporting a “significant risk of collision” had they been on a different approach path. In addition – and just 30 minutes later – one of these drones flew within 50m of another passenger jet, a Boeing 777.

Videos clearly taken from drones have also been uploaded to websites such as YouTube – a clear breach of the CAA’s rules prohibiting the flying of drones over or within 150m of built-up areas. These include footage of events such as the Cambridge Folk Festival, a match at Liverpool FC’s Anfield Stadium, and Nottingham’s Goose Fair. Jordan Brooks, who works for Upper Cut Productions – a company which specialises in using drones for aerial photography and filming – explains that:

“They look like toys. For anyone buying one you feel like you’re flying a toy ‘copter when actually you’ve got a hazardous helicopter that can come down and injure somebody.”

Privacy concerns have also started to emerge. Sally Annereau, data protection analyst at law firm Taylor Wessing, highlights a recent European case which held that a suspect’s rights had been infringed by a homeowner’s CCTV recording him whilst he was in a public place. Although not specifically about drones, Sally Annereau suggests this decision will have far reaching consequences, with potential implications for drone users recording in public and sharing their footage on social media sites. The Information Commissioner’s Office (ICO) has already issued guidance for drones.

The CAA reported more than 3,456 incidents involving drones in 2016 – a significant increase on the 1,237 incidents recorded in 2015.

The response

Cities have often taken contradictory approaches to drones. Bristol City Council has banned their use in the majority of its parks and open spaces. Similarly, several London boroughs have introduced ‘no drone zones’, although the London Borough of Richmond upon Thames has a relatively open policy, only banning drones over Richmond Park. Further, Lambeth Council requires hobbyists to complete an application form “to ensure suitability” – a standard similar to that applied to commercial drone pilots.

There have also been several accusations of double standards, as large commercial operators such as Amazon receive exemptions to CAA rules ahead of photographers recording events, hospitals delivering blood, and researchers collecting data.

Although cities have a responsibility to protect the public, they also have to ensure citizens are able to exercise their rights. The air is a common space, and as such cities must ensure that hobbyists – as well as multinational firms – can enjoy the airspace. Thus, it might be interesting to see cities take a more positive approach and designate ‘drone zones’, where hobbyists can get together and fly their drones away from potential hazards.


The Knowledge Exchange provides information services to local authorities, public agencies, research consultancies and commercial organisations across the UK. Follow us on Twitter to see what developments in policy and practice are interesting our research team. 

Smart Chicago: how smart city initiatives are helping meet urban challenges


By Steven McGinty

Home to former President Barack Obama, sporting giants the Chicago Bulls, and the culinary delicacy deep dish pizza, Chicago is one of the most famous cities in the world. Less well known is Chicago’s ambition to become the most data-driven city in the world.

A late convert to the smart city agenda, Chicago was lagging behind local rivals New York and Boston, and international leaders Barcelona, Amsterdam, and Singapore.

But in 2011, Chicago’s new Mayor Rahm Emanuel outlined the important role technology needed to play, if the city was to address its main challenges.

Laying the groundwork – open data and tech plan

In 2012, Mayor Rahm Emanuel issued an executive order establishing the city’s open data policy. The order was designed to increase transparency and accountability in the city, and to empower citizens to participate in government, solve social problems, and promote economic growth. It required every city agency to contribute data to the city’s open data portal and established reporting requirements to ensure agencies were held accountable.

Chicago’s open data portal has nearly 600 datasets, which is more than double the number in 2011. The city works closely with civic hacker group Open Chicago, an organisation which runs hackathons (collaborations between developers and businesses using open data to find solutions to city problems).
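
The portal runs on the Socrata open data platform, so individual datasets can be read as JSON over a simple REST endpoint. A minimal sketch in Python follows; the dataset identifier below is a placeholder, and real identifiers can be looked up on data.cityofchicago.org.

```python
import requests

# Placeholder dataset identifier; substitute a real one from data.cityofchicago.org
DATASET_ID = "xxxx-xxxx"
URL = f"https://data.cityofchicago.org/resource/{DATASET_ID}.json"

# Fetch the first 100 rows of the dataset as a list of dictionaries
response = requests.get(URL, params={"$limit": 100}, timeout=30)
response.raise_for_status()
rows = response.json()

print(f"Retrieved {len(rows)} records")
if rows:
    print("Fields in the first record:", sorted(rows[0]))
```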

In 2013, the City of Chicago Technology Plan was released. This brought together 28 of the city’s technology initiatives into one policy roadmap, setting them out within five broad strategic areas:

  • Establishing next-generation infrastructure
  • Creating smart communities
  • Ensuring efficient, effective, and open government
  • Working with innovators to develop solutions to city challenges
  • Encouraging Chicago’s technology sector

Array of Things

The Array of Things is an ambitious programme to install 500 sensors throughout the city of Chicago. Described by the project team as a ‘fitness tracker for the city’, the sensors will collect real-time data on air quality, noise levels, temperature, light, pedestrian and vehicle traffic, and the water levels on streets and gutters. The data gathered will be made publicly available via the city’s website, and will provide a vital resource for the researchers, developers, policymakers, and citizens trying to address city challenges.
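
Once published, data of this kind lends itself to simple aggregation. The sketch below assumes a flat list of readings with invented field names (the project’s actual schema may differ) and computes hourly averages per sensor:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical readings: (sensor_id, hour_of_day, metric, value)
readings = [
    ("node-001", 8, "pm2_5", 12.4),
    ("node-001", 8, "pm2_5", 14.1),
    ("node-001", 9, "pm2_5", 9.8),
    ("node-002", 8, "noise_db", 71.5),
]

grouped = defaultdict(list)
for sensor, hour, metric, value in readings:
    grouped[(sensor, hour, metric)].append(value)

# Average each metric per sensor and hour of day
for (sensor, hour, metric), values in sorted(grouped.items()):
    print(f"{sensor} {metric} at {hour:02d}:00 -> {mean(values):.1f}")
```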

This new initiative is a major project for the city, but as Brenna Berman, Chicago’s chief information officer, explains:

“If we’re successful, this data and the applications and tools that will grow out of it will be embedded in the lives of residents, and the way the city builds new services and policies.”

Potential applications for the city’s data could include providing citizens with information on the healthiest and unhealthiest walking times and routes through the city, as well as the areas likely to be impacted by urban flooding.

The project is led by the Urban Center for Computation and Data of the Computation Institute, a joint initiative of Argonne National Laboratory and the University of Chicago. However, a range of partners are involved in the project, including several universities; the City of Chicago, which provides an important governance role; and technology firms such as Product Development Technologies, the company which built the ‘enclosures’ that protect the sensors from environmental conditions.

A series of community meetings was held to introduce the Array of Things concept to the community and to consult on the city’s governance and privacy policy. This engagement ranged from holding public meetings in community libraries to providing online forms, where citizens could provide feedback anonymously.

In addition, the Urban Center for Computation and Data and the School of the Art Institute of Chicago ran a workshop entitled the “Lane of Things”, which introduced high school students to sensor technology. The workshop is part of the Array of Things education programme, which aims to use sensor technology to teach students about subjects such as programming and data science. For eight weeks, the students were given the opportunity to design and build their own sensing devices and implement them in the school environment, collecting information such as dust levels from nearby construction and the dynamics of hallway traffic.

The Array of Things project is funded by a $3.1 million National Science Foundation grant and is expected to be complete by 2018.

Mapping Subterranean Chicago

The City of Chicago is working with local technology firm, City Digital, to produce a 3D map of the underground infrastructure, such as water pipes, fibre optic lines, and gas pipes. The project will involve engineering and utility workers taking digital pictures as they open up the streets and sidewalks of Chicago. These images will then be scanned into City Digital’s underground infrastructure mapping (UIM) platform, and key data points will be extracted from the image, such as width and height of pipes, with the data being layered on a digital map of Chicago.

According to Brenna Berman:

“By improving the accuracy of underground infrastructure information, the platform will prevent inefficient and delayed construction projects, accidents, and interruptions of services to citizens.”

Although still at the pilot stage, the technology has been used on one construction site and an updated version is expected to be used on a larger site in Chicago’s River North neighbourhood. Once proven, the city plans to charge local construction and utility firms to access the data, generating income whilst reducing the costs of construction and improving worker safety.

ShotSpotter

In January, Mayor Rahm Emanuel and Chicago Police Department commanders announced the expansion of ShotSpotter – a system which uses sensors to capture audio of gunfire and alert police officers to its exact location. The expansion will take place in the Englewood and Harrison neighbourhoods, two of the city’s highest crime areas, and should allow police officers to respond to incidents more rapidly.
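
ShotSpotter’s algorithms are proprietary, but the underlying idea is acoustic multilateration: because sound travels at a known speed, differences in arrival times across several sensors constrain where the gunshot occurred. A rough sketch of that general technique (sensor positions and arrival times below are invented for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # metres per second

# Invented sensor positions (metres) and arrival times (seconds)
sensors = np.array([[0.0, 0.0], [800.0, 0.0], [0.0, 800.0], [800.0, 800.0]])
arrival_times = np.array([1.74, 2.20, 2.20, 2.56])

def residuals(params):
    """Mismatch between measured arrival times and those implied by (x, y, t0)."""
    x, y, t0 = params
    distances = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
    return distances - SPEED_OF_SOUND * (arrival_times - t0)

# Initial guess: centroid of the sensors, emission shortly before the first arrival
initial = [*sensors.mean(axis=0), arrival_times.min() - 1.0]
solution = least_squares(residuals, initial)
x, y, _ = solution.x
print(f"Estimated source position: ({x:.0f} m, {y:.0f} m)")
```

In practice, many more sensors, noise filtering and classification of the sound as gunfire are needed before an alert ever reaches officers; the sketch shows only the locating step.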

Chicago Police Superintendent Eddie Johnson highlights that although crime and violence presents a complex problem for the city, the technology has resulted in Englewood going “eight straight days without a shooting incident”, the longest period in three years.

ShotSpotter will also be integrated into the city’s predictive analytics tools, which are used to assess how likely individuals are to become victims of gun crime, based on factors such as the number of times they have been arrested with individuals who have become gun crime victims.

Final thoughts

Since 2011, Chicago has been attempting to transform itself into a leading smart city. Although it’s difficult to compare Chicago with early adopters such as Barcelona, the city has clearly introduced a number of innovative projects and is making progress on its smart city journey.

In particular, the ambitious Array of Things project will have many cities watching to see if understanding the dynamics of city life can help to solve urban challenges.


Follow us on Twitter to see what developments in public and social policy are interesting our research team.

If you found this article interesting, you may also like to read our other smart cities articles.

General Data Protection Regulation (GDPR): 10 things business needs to know

 


By Steven McGinty

On 25 May 2018, the data protection landscape will experience its biggest change in over 20 years.  This is because the European Union’s (EU) General Data Protection Regulation (GDPR) will come into effect for all member states. The regulation, which has been described as ‘ambitious’ and ‘wide-ranging’, introduces a number of new concepts, including the high profile ‘right to be forgotten’ – a principle established in a case involving technology giant Google.

Below we’ve highlighted ten of the most important points for business.

Directly effective

The GDPR is ‘directly effective’, which means that the regulation becomes law without the need for additional domestic legislation (replacing the Data Protection Act 1998). However, member states have also been given scope to introduce their own legislation on matters such as the processing of personal data. This may result in some EU states having more stringent rules than others.

Sharing data and monitoring

It also seeks to increase the reach of EU data protection law. Not only will EU-based data controllers and processors fall under the scope of the GDPR, but its authority will also extend to any business which either processes personal data or monitors the behaviour of individuals within the EU.

This will impact businesses who transfer data outside the European Economic Area (EEA). It will be their responsibility to ensure that the country the data is being transferred to has adequate levels of data protection. The most prominent example of this issue was the US Safe Harbour scheme, which was intended to protect European individuals whose personal data is transferred between the EEA and the USA. In 2015, the European Court of Justice ruled that this scheme had ceased to provide a valid legal basis for EEA-US transfers of all types of personal data. It has now been replaced by the Privacy Shield.

Transparency and consent

Greater obligations have been placed on businesses with regard both to seeking consent for the use of personal data and to providing detailed information to individuals on how their personal data is being used. The GDPR requires that consent notices are ‘unambiguous’ – not assumed from a person’s failure to respond – and that consent is sought for different processing activities. Law firm Allen and Overy recommends that businesses review their notices to ensure they are fit for purpose.

Personal data / sensitive data

Article 4(1) of the GDPR includes a broader definition of ‘personal data’ than previous legislation. It states that any information relating to an individual which can be directly or indirectly used to identify them is personal data. Specifically, it refers to ‘online identifiers’, which suggests that IP addresses and cookies may be considered personal data if they can be easily linked back to the person.

Enhanced rights

New rights and the enhancement of existing rights will require some businesses to improve the way personal data is stored and managed. These rights include the following (a brief illustrative sketch of servicing such requests follows the list):

  • Data portability – Businesses must ensure that individuals can have easy access to their personal data in case they want to transfer it to other systems.
  • Strengthening subject access rights – Individuals can now request access to their data at no cost, and organisations must respond within 30 days (under the current legislation a £10 fee can be charged and organisations have 40 days to respond).
  • Right to be forgotten – Individuals can request that an organisation delete all the information they hold on them (although this would not apply if there was a valid reason to hold that data).
  • Right to object to processing – Individuals have the right to object to the way an organisation is processing their data.
  • Right to restrict processing – Individuals have the right to request that the processing of personal data is temporarily stopped. This may be invoked whilst a right to object request is being investigated.
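
In practical terms, servicing the rights listed above means being able to export, delete and stop processing an individual’s data on request, within the one-month window. The sketch below is a minimal illustration of that plumbing; the storage layout and field names are assumptions, not a statement of what the regulation requires.

```python
import json
from datetime import date, timedelta

RESPONSE_WINDOW_DAYS = 30  # the (approximately one-month) response window

# Toy "database" keyed by data subject
records = {
    "subject-42": {"name": "A. Person", "email": "a.person@example.com"},
}

def handle_access_request(subject_id: str, received: date) -> dict:
    """Subject access / portability: export the data with a response deadline."""
    return {
        "due_by": str(received + timedelta(days=RESPONSE_WINDOW_DAYS)),
        "export": json.dumps(records.get(subject_id, {})),
    }

def handle_erasure_request(subject_id: str, retention_reason: str = "") -> str:
    """Right to be forgotten: delete unless a valid reason to retain applies."""
    if retention_reason:
        return f"retained: {retention_reason}"
    records.pop(subject_id, None)
    return "erased"

print(handle_access_request("subject-42", date(2018, 5, 25)))
print(handle_erasure_request("subject-42"))
```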

Personal data breach

Businesses have an obligation to report breaches to their national regulator, such as the Information Commissioner’s Office (ICO) in the UK. The GDPR requires that notice must be provided “without undue delay and, where feasible, not later than 72 hours after having become aware of it.” This may be challenging for some businesses, particularly if the incident is discovered at the end of the working week.
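
The 72 hours run as calendar hours rather than working days, which is why a discovery late on a Friday is awkward. A short check (with an assumed discovery time, purely for illustration) makes the point:

```python
from datetime import datetime, timedelta

discovered = datetime(2018, 6, 1, 17, 30)      # assume discovery on a Friday evening
deadline = discovered + timedelta(hours=72)    # notification window under the GDPR
print(deadline.strftime("%A %d %B %Y, %H:%M")) # Monday 04 June 2018, 17:30
```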

Failure to comply with GDPR

The regulation introduces two levels of fines. Less serious offences under the regulation will be liable for a fine of up to €10,000,000 or 2% of global turnover, whichever is higher. However, for more serious breaches, such as a breach of an individual’s rights or a breach during international transfers, a business may be held liable for up to €20,000,000 or 4% of global turnover.

In addition, individuals are also given the right of redress, and those who have had their rights violated may seek to receive compensation. This has led digital marketers to suggest that GDPR could be the next PPI – a practice where insurance was mis-sold to customers, which resulted in a large number of successful claims against financial institutions.

Privacy by design

Technology businesses should also consider data protection at the initial design stage of product development. This could involve adopting technical measures such as pseudonymisation – the technique of processing personal data in such a way that it can no longer be attributed to a particular person without additional information. Additional measures, such as policies and programmes, would also demonstrate to a national regulator a commitment to compliance with the GDPR.
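
As a rough sketch of what pseudonymisation can look like in practice, a keyed hash replaces the direct identifier with a stable token; the key must be held separately from the data, and real schemes vary considerably.

```python
import hashlib
import hmac

SECRET_KEY = b"hold-this-key-separately-from-the-data"  # illustrative only

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, stable pseudonym."""
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# The same input and key always yield the same pseudonym, so records can still
# be linked for analysis without exposing the underlying identifier.
print(pseudonymise("jane.doe@example.com"))
print(pseudonymise("jane.doe@example.com") == pseudonymise("jane.doe@example.com"))  # True
```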

European Data Protection Board (EDPB)

A new body has been created to issue opinions and to arbitrate in disputes that arise between national regulators. The board will be made up of the heads of national regulatory bodies (or their representatives) and the European Data Protection Supervisor (EDPS), which governs the data processing activities of EU institutions. The opinions expressed by this board may have important implications for data protection legislation.

Impact of Brexit

Evidence suggests some businesses may be delaying taking action until they see the results of the Brexit negotiations. This may partly explain research by cloud security firm Netskope, which found that 63% of UK workers had never heard of the GDPR. Similarly, research by Veritas Technologies, a leading information management firm, found that 54% of organisations had not ensured they will comply with the new GDPR.

However, it would be very surprising if the UK did not ‘mirror’ the protections offered by the regulation, particularly considering the UK’s significant input to the new legislation. Digital minister Matt Hancock has also confirmed that the UK government intends to fully implement the GDPR.

Final thoughts

If businesses already have policy and procedures in place to meet the requirements of the Data Protection Act, then they should have a solid foundation to comply with the GDPR. In many ways, the new regulation simply provides a clear framework for delivering good practice in data protection.

However, all businesses will need to take action to ensure compliance with the GDPR. Otherwise, the financial penalties (as well as reputational damage) of a breach could have serious consequences for their business. And this is not just an IT issue. The whole organisation, starting from board level, must show a willingness to understand the legislation and implement procedures that protect the fundamental rights of individuals.


Follow us on Twitter to see what developments in public and social policy are interesting our research team. If you found this article interesting, you may also like to read our other data-related articles

What’s happening to make big data use a reality in health and social care?

By Steven McGinty

At the beginning of the year, NHS Director Tim Kelsey described the adoption of new technologies in the NHS as a ‘moral obligation’. He argued that the gaps in knowledge were so wide and so dangerous that they were putting lives at risk. It’s therefore no surprise that the UK Government, the NHS, and local governments have all been looking at ways to better understand the health and social care environment.

The effective use of ‘big data’ techniques is said to be key to this understanding. Big data has many definitions but industry analysts Gartner define it as:

“high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making”

However, if health and social care is to make better use of its data, it’s important that an effective infrastructure is in place. As a result, changes have been made to legislation and a number of initiatives introduced.

Why is it important to know about big data in health and social care?

The effective use of data in health and social care is a key policy aim of the current government (and will most likely continue under future governments).  The changes that have been made so far have had a significant impact on the policies and practices of health and social care organisations. The vast majority focus on information sharing, in particular how organisations share data and who they share data with.

What changes have been made to support big data?

Care.data

This was the most ambitious programme introduced by NHS England. It was developed by the Health and Social Care Information Centre (HSCIC) and set out to link the medical records of GP practices with hospital records at a national level. It was expected that datasets from GP records and hospital records would be linked using an identifier such as an NHS number or a person’s date of birth. However, due to concerns raised by the public, particularly in regard to privacy, the programme was delayed. The programme has now resumed, but new safeguards have been introduced, such as the commissioning of an advisory board and an ‘opt out’ provision, under which patients can opt out of having their data used for anything other than their direct care.
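
The linkage step itself is conceptually straightforward: join the two datasets on a shared identifier such as the NHS number. The toy sketch below shows that idea only; the field names and values are invented, and the real programme operated under far stricter controls.

```python
# Toy illustration of record linkage on a shared identifier; not the HSCIC's pipeline.
gp_records = [
    {"nhs_number": "943 476 5919", "gp_diagnosis": "type 2 diabetes"},
    {"nhs_number": "943 476 5870", "gp_diagnosis": "asthma"},
]
hospital_records = [
    {"nhs_number": "943 476 5919", "admission": "2015-02-14", "ward": "endocrinology"},
]

hospital_by_id = {record["nhs_number"]: record for record in hospital_records}

# Keep only GP records with a matching hospital record, merging the two
linked = [
    {**gp, **hospital_by_id[gp["nhs_number"]]}
    for gp in gp_records
    if gp["nhs_number"] in hospital_by_id
]
print(linked)
```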

The Health and Social Care Act 2012 and the Care Act 2014

The Acts have both introduced provisions that impact on data. For instance, the Health and Social Care Act enshrines in law the ability of the Health and Social Care Information Centre (HSCIC) to collect and process confidential personal data. In addition, the Care Act clarifies the position of the Health and Social Care Act by ensuring that the HSCIC doesn’t distribute data unless it’s part of the provision of health and social care or the promotion of health.

Centre of Excellence for Information Sharing

This initiative came from the ‘Improving Information Sharing and Management (IISaM) project’, a joint initiative between Bradford Metropolitan District Council, Leicestershire County Council and the 10 local authorities in Greater Manchester. The centre has been set up to help understand the barriers to information sharing and influence national policy. They hope to achieve these goals through the use of case studies, blogs, the development of toolkits, and any other forms of shared learning. The centre has already published some interesting case studies including the Hampshire Health Record (HHR) and Leicestershire County Council’s Children and Young People’s Service (CYPS) approach to communicating how they deal with data.

These are just some of the steps that have been taken to make sure 2015 is the year of big data. However, if real progress is to be made it’s going to require more than top down leadership and headline grabbing statements. It’s going to require all health and social care organisations to take responsibility and work through their barriers to information sharing.


Further reading

Read our other recent blogs on health and social care.

Become a member of the Idox Information Service now, to access a wealth of further information on health and social care including best practice and commentary. Contact us for more details.

Smart cities … treading the line between the possible, the probable and the desirable

By Morwen Johnson

Sometimes it feels like every city in the world is now claiming to be ‘smart’. Our research team regularly add new reports on the topic to our database. And with a policy agenda riding on the back of a multi-billion pound global industry, the positivist rhetoric around smart cities can seem overwhelming.

We’ve blogged before about the disconnect between what surveys suggest the public values in terms of quality of life in urban areas, and what smart cities are investing in. And last week I attended a conference in Glasgow ‘Designing smart cities: opportunities and regulatory challenges’ which refreshingly brought together a multi-disciplinary audience to look at smart cities in a more critical light.

The conference was rich and wide-ranging – too broad for me to try and summarise the discussions. Instead here are some reflections on the challenges which need to be explored.

Every smart city is a surveillance city

Look in any smart city prospectus or funding announcement and you’ll find mention of how data will be ‘managed’, ‘captured’, ‘monitored’, ‘shared’, ‘analysed’, ‘aggregated’, ‘interrogated’ etc. And this is inevitably presented as a benign activity happening for the common good, improving efficiency, saving money and making life better.

As David Murakami Wood pointed out at the conference however, this means that every smart city is by necessity a surveillance city – even if policymakers and stakeholders are reluctant to admit this.

Public debate is failing to keep up with the pace of change

Even for someone who takes a keen interest in urbanism and the built environment, any description of smart cities can risk leaving you feeling like a techno-illiterate dinosaur. It’s clear that there is also a huge amount of hype around the construction (or retrofitting) of smart cities – with vested interests keen to promote a positive message.

Do we really understand the possibilities being opened up when we embed technology in our urban infrastructure? And more importantly, what are the ethical questions raised around sharing and exploiting data? The pace of the development and rollout of new technologies within our urban environments seems to be running ahead of the desirable cycle of reflection and critique.

An interesting point was also made about language – and whether experts, technologists and policymakers need to adjust their use of language and jargon, in order for discussion about smart cities to be inclusive. Ubicomp … augmented reality … the Internet of Things … even the Cloud – how can the public give informed consent to participating in the smart city if the language used obscures and obfuscates what is happening with their data?

Where can we have a voice in the data city?

Following on from this point, cities are not ends in themselves – to be successful they must serve the interests and needs of the people who live, work and visit them. An interesting strand of the conference discussion considered what a bottom-up approach to smart cities would look like.

Alison Powell highlighted that there’s been a shift from seeing people as citizens to treating them as ‘citizen consumers’ – I’d add that within the built environment, this goes hand-in-hand with the commercialisation and privatisation of public space – and this has profound implications around questions of inclusion/exclusion. And also where power and decision-making sits – and who is profiting.

Although some general examples of community participation projects were mentioned during the conference, these didn’t seem to address the question of how ‘people’ can engage with smart cities. Not as problems to be managed or controlled – or as passive suppliers of data to sensors – but as creative and active participants.

Conclusion

I left the conference wondering where society is heading and how we, the Knowledge Exchange, can support our members in local government and the third sector to understand the extensive opportunities and implications of smart cities. We see a key part of our mission to be horizon scanning – and our briefings for members focus on drawing together analysis, emerging evidence and case studies.

Not all towns or cities have the resources, investment or desire to lead the way in technological innovation. But the challenge of bridging the gap between professionals and their vision and understanding of smart cities, and people in communities, is a universal one.

As William Gibson observed: “The future is already here … it’s just not very evenly distributed”.


 

The Idox Information Service can give you access to a wealth of further information on smart cities or public participation. To find out more on how to become a member, contact us.

Our reading list prepared for last autumn’s Annual UK-Ireland Planning Research Conference looks at some recent literature on smart cities.

The conference Designing smart cities: opportunities and regulatory challenges was held at the University of Strathclyde on 31 March and 1 April 2015, supported by CREATe and Horizon.

The Idox Group is the leading applications provider to UK local government for core functions relating to land, people and property, such as its market leading planning systems. Over 90% of UK local authorities are now customers. Idox provides public sector organisations with tools to manage information and knowledge, documents, content, business processes and workflow as well as connecting directly with the citizen via the web.