Smarter tourism: solving the data problem to boost tourism and create better cities

By Steven McGinty

On 22 March, I attended ‘Smarter Tourism: Shaping Glasgow’s Data Plan’, an event held as part of DataFest 2017, a week-long festival of data innovation with events hosted across Scotland.

Daniel MacIntyre, from Glasgow City Marketing Bureau (the city’s official marketing organisation), opened the event by highlighting Glasgow’s ambitious target of increasing visitor numbers from two million to three million by 2023.

To achieve this goal, Mr MacIntyre explained that the city would be looking to develop a city data plan, which would outline how the city should use data to solve its challenges and to provide a better experience for tourists.

In many ways, Glasgow’s tourism goal set the context for the presentations that followed, providing the attendees – who included professionals from the technology and tourism sectors, as well as academia and local government – with an understanding of the city’s data needs and how it could be used.

Identifying the problem

From very early on, there was a consensus in the room that tourism bodies have to identify their problems before seeking out data.

A key challenge for Glasgow, Mr MacIntyre explained, was a lack of real-time data. Much of the data available to the city’s marketing bureau was historic (sometimes three years old) and gathered through passenger or visitor experience surveys. Mr MacIntyre clearly felt that this approach was rather limiting in the 21st century, highlighting that businesses, including restaurants, attractions and transport providers, were all collecting data, and that if marketing authorities could work in collaboration and share this data, it could bring a number of benefits.

In essence, Mr MacIntyre saw Glasgow using data in two ways. Firstly, to provide a range of insights, which could support decision making in destination monitoring, development, and marketing. For instance, having data on refuse collection could help ensure timely collections and cleaner streets. A greater understanding of restaurant, bar, and event attendances could help develop Glasgow’s £5.4 million a year night time economy by producing more informed licensing policies. And the effectiveness of the city’s marketing could be improved by capturing insights from social media data, creating more targeted campaigns.

Secondly, data could be used to monitor or evaluate events. For example, the impact of sporting events such as Champions League matches – which increase visitor numbers to Glasgow and provide an economic boost to the city – could be far better understood.

Urban Big Data Centre (UBDC)

One potential solution to Glasgow City Marketing Bureau’s need for data may be organisations such as the Urban Big Data Centre.

Keith Dingwall, Senior Business Manager for the UBDC, explained that the centre supports researchers, policymakers, businesses, third sector organisations, and citizens by providing access to a wide variety of urban data. Example datasets include: housing; health and social care data; transport data; geospatial data; and physical data.

The UBDC is also involved in a number of projects, including the integrated Multimedia City Data (iMCD) project. One interesting aspect of this work involved the extraction of Glasgow-related data streams from multiple online sources, particularly Twitter. The data covers a one-year period (1 Dec 2014 – 30 Nov 2015) and could provide insights into the behaviour of citizens or their reactions to particular events, all of which could be potentially useful for tourism bodies.
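To give a flavour of what extracting a city-related data stream involves, here is a minimal, purely illustrative sketch: filtering tweet-like records down to Glasgow-related items, either by keyword or by a rough bounding box around the city. The field names, keywords and coordinates are assumptions for illustration, not the iMCD project's actual pipeline.

```python
# Approximate bounding box around Glasgow: (min_lon, min_lat, max_lon, max_lat)
GLASGOW_BBOX = (-4.45, 55.78, -4.05, 55.93)
KEYWORDS = ("glasgow", "#peoplemakeglasgow")  # illustrative keywords

def is_glasgow_related(tweet):
    """Return True if the tweet mentions Glasgow or is geotagged within the city."""
    text = tweet.get("text", "").lower()
    if any(k in text for k in KEYWORDS):
        return True
    coords = tweet.get("coordinates")  # assumed (lon, lat) tuple or None
    if coords:
        lon, lat = coords
        min_lon, min_lat, max_lon, max_lat = GLASGOW_BBOX
        return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat
    return False

# A toy stream of records standing in for a live Twitter feed
stream = [
    {"text": "Great gig in Glasgow tonight!", "coordinates": None},
    {"text": "Stuck in traffic", "coordinates": (-4.25, 55.86)},
    {"text": "Sunny day in Edinburgh", "coordinates": (-3.19, 55.95)},
]

glasgow_tweets = [t for t in stream if is_glasgow_related(t)]
print(len(glasgow_tweets))  # 2
```

In practice a project like iMCD would work with far richer metadata and proper geospatial tooling, but the core idea of filtering a firehose down to city-relevant records is the same.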

Predictive analytics

Predictive analytics, i.e. the combination of data and statistical techniques to make predictions about future events, was a major theme of the day.

Faical Allou, Business Development Manager at Skyscanner, and Dr John Wilson, Senior Lecturer at the University of Strathclyde, presented their Predictive Analytics for Tourism project, which attempted to predict future hotel occupancy rates for Glasgow using travel data from Glasgow and Edinburgh airports.

Glasgow City Marketing Bureau also collaborated on the project – which is not too surprising, as there are a number of useful applications for travel data, including helping businesses respond better to changing events, understanding the travel patterns of visitors to Glasgow, and recommending personalised products and services that enhance the traveller’s experience (increasing visitor spending in the city).

However, Dr Wilson advised caution, explaining that although patterns could be identified from the data (including spikes in occupancy rates), there were limitations due to the low number of datasets available. In addition, one delegate highlighted a ‘data gap’, suggesting that the data didn’t cover travellers who flew into Glasgow or Edinburgh but then made onward journeys to other cities.

Uber

The technology-enabled transport company Uber has been very successful at using data to provide a more customer-oriented service. Although much of Uber’s growth has come from its core app – which allows users to book a ride – the company is also introducing innovative new services and integrating its app into platforms such as Google Maps, making it easier for customers to request its services.

And in some locations, whilst Uber users are travelling, they will receive local maps, as well as information on nearby eateries through their UberEATS app.

Uber Movement, an initiative which provides access to the anonymised data of over two billion urban trips, has the potential to improve urban planning in cities. It includes data which helps tourism officials, city planners, policymakers and citizens understand the impact of rush hours, events, and road closures in their city.

Chris Yiu, General Manager at Uber, highlighted that people lose weeks of their lives waiting in traffic jams. He suggested that the future of urban travel will involve a combination of good public transport services and car sharing services, such as uberPOOL (a service which matches users with other people travelling in the same direction), providing the first and last mile of journeys.

Final thoughts

The event was a great opportunity to find out about the data challenges for tourism bodies, as well as initiatives that could potentially provide solutions.

Although a number of interesting issues were raised throughout the day, two key points kept coming to the forefront. These were:

  1. The need to clarify problems and outcomes – Many felt it was important that cities identified the challenges they were looking to address. This could be looked at in many ways, from addressing the need for more real-time data, to a more outcome-based approach, such as the need to achieve a 20% reduction in traffic congestion.
  2. Industry collaboration – Much of a city’s valuable data is held by private sector organisations. It’s therefore important that cities (and their tourism bodies) encourage collaboration for the mutual benefit of all partners involved. Achieving a proposition that provides value to industry will be key to achieving smarter tourism for cities.

Follow us on Twitter to see what developments in public and social policy are interesting our research team. If you enjoyed this article, you may also be interested in: 

Rise of the Datavores … showing no fear of data, it takes skills

Previous work by NESTA highlighted companies with apparently no fear of data. They called them ‘datavores’. When making decisions about how to grow their sales, datavores rely on data and analysis over experience and intuition.

Does being data active have an impact?

According to a new NESTA report published this week Skills of the datavores: talent and the data revolution, those organisations which are more ‘data-active’ perform better than those that are not, as the infographic above illustrates:

  • Datavores are 10% more productive
  • But, only 18% of companies are datavores
  • If all “dataphobes” became “datavores” it would add a 3% uplift in productivity
  • Data-driven firms are 40% more likely to launch new products and services.

What does a skilled data workforce look like?

The research suggests that the biggest issue facing the industry is the lack of skilled data analysts/scientists, for whom demand has grown 41%. Businesses are using a combination of actions to address this shortage of skilled people, including off-shoring the roles, recruiting the best fit available, and using a mixture of in-house, on-the-job and external training to grow their own.

Many organisations are also developing inter-disciplinary teams to create a data-literate workforce, because the full set of skills needed in a single data scientist is so rare; as the report says, as rare as “unicorns”. Our own experience of recruiting a data scientist would support this.

The workforce which is emerging is one focussed on adaptability and flexibility, drawing on data skills from across a range of disciplines: qualitative researchers, mathematicians, statisticians, developers and business analysts. Within this mix of skills, the new workforce also needs the creative flair and business knowledge to use data in the organisation’s best interest and to add value.

What does it mean for skills suppliers?

As an emerging profession, it is difficult to pin down the exact skills an employer needs which in turn makes it difficult for schools, colleges and universities to supply the right type of education. The accompanying policy briefing from NESTA and Universities UK, Analytic Britain: securing the right skills for the data-driven economy, makes a number of recommendations, highlighted in the infographic above, many of which focus on the skills suppliers.

Universities are both a supplier and a user of these skills and have a unique opportunity to really engage with the market. The focus on metrics in both the proposed Teaching Excellence Framework and the Research Excellence Framework means that universities themselves need the same skills, and have an opportunity to supply them based on experience.

For universities these recommendations have a number of impacts, and data issues are increasingly at the forefront of policy thinking. Universities UK has reviewed how data analytics are taught across disciplines and reflects on the shortage of academic staff who are confident in teaching data analytics in this way and the varying skills of students entering higher education.

The pervasive nature of the data revolution explains why a variety of disciplines and skills are being brought together. No one can argue against the need for more and better data to improve policy making and business planning. Plenty of data is now being captured but not used, and in the words of John Lennon “you say you want a revolution” and “we all want to change the world” … data is changing our world significantly but are you equipped for it?


The Idox Information Service can help you access further information on the use of data science, and the skills needed. To find out more on how to become a member, contact us.

Download the Datavores Infographic.

Further reading on the topics covered in this blog and infographic*:

Skills of the datavores: talent and the data revolution

Are you a Datavore? Insights on the use of online customer data in decision-making

UK data capability strategy: seizing the data opportunity

Information economy strategy

Inside the Datavores: how data and online analytics affect business performance

Employer insights: skills survey 2015

Big data analytics: assessment of demand for labour and skills 2013–2020

UK corporate perspectives: new technologies – where next?

*Some resources may only be available to members of the Idox Information Service

What’s happening to make big data use a reality in health and social care?

By Steven McGinty

At the beginning of the year, NHS Director Tim Kelsey described the adoption of new technologies in the NHS as a ‘moral obligation’. He argued that the gaps in knowledge are so wide and so dangerous that they were putting lives at risk. It’s therefore no surprise that the UK Government, the NHS, and local governments have all been looking at ways to better understand the health and social care environment.

The effective use of ‘big data’ techniques is said to be key to this understanding. Big data has many definitions but industry analysts Gartner define it as:

“high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making”

However, if health and social care is to make better use of its data, it’s important that an effective infrastructure is in place. As a result, changes have been made to legislation and a number of initiatives introduced.

Why is it important to know about big data in health and social care?

The effective use of data in health and social care is a key policy aim of the current government (and will most likely continue under future governments).  The changes that have been made so far have had a significant impact on the policies and practices of health and social care organisations. The vast majority focus on information sharing, in particular how organisations share data and who they share data with.

What changes have been made to support big data?

Care.data

This was the most ambitious programme introduced by NHS England. It was developed by the Health and Social Care Information Centre (HSCIC) and set out to link the medical records of GP practices with hospital records at a national level. It was expected that datasets from GPs’ records and hospital records would be linked using an identifier such as an NHS number or a person’s date of birth. However, due to concerns raised by the public, particularly in regard to privacy, the programme was delayed. The programme has now resumed, but new safeguards have been introduced, such as the commissioning of an advisory board and an ‘opt out’ provision, where patients can opt out from having their data used for anything other than their direct care.

The Health and Social Care Act 2012 and the Care Act 2014

The Acts have both introduced provisions that impact on data. For instance, the Health and Social Care Act enshrines in law the ability of the Health and Social Care Information Centre (HSCIC) to collect and process confidential personal data. In addition, the Care Act clarifies the position of the Health and Social Care Act by ensuring that the HSCIC doesn’t distribute data unless it’s part of the provision of health and social care or the promotion of health.

Centre of Excellence for Information Sharing

This initiative came from the ‘Improving Information Sharing and Management (IISaM) project’, a joint initiative between Bradford Metropolitan District Council, Leicestershire County Council and the 10 local authorities in Greater Manchester. The centre has been set up to help understand the barriers to information sharing and influence national policy. They hope to achieve these goals through the use of case studies, blogs, the development of toolkits, and any other forms of shared learning. The centre has already published some interesting case studies including the Hampshire Health Record (HHR) and Leicestershire County Council’s Children and Young People’s Service (CYPS) approach to communicating how they deal with data.

These are just some of the steps that have been taken to make sure 2015 is the year of big data. However, if real progress is to be made it’s going to require more than top down leadership and headline grabbing statements. It’s going to require all health and social care organisations to take responsibility and work through their barriers to information sharing.


Further reading

Read our other recent blogs on health and social care:

Become a member of the Idox Information Service now, to access a wealth of further information on health and social care including best practice and commentary. Contact us for more details.

New Idox research – Data Mining to inform public policy

By Susan Lomax, Data Scientist, Knowledge Transfer Partnership placement

The latest “new” thing in the world of data mining is using “Big Data” to inform public policy. Using data mining methods, we can aid evidence-based decision making by learning what the data can tell us and using this to write or implement policy. Idox are now exploring these methods to look at opportunities for our public policy and research members.

Investigation suggests that using data in this way is still in its infancy: data mining methods are beginning to be used, but so far very little has been completed. Published examples include the London Borough of Newham’s property data, which has been combined with numerous other datasets and mined to examine change in property tenure in order to support, amongst other things, its housing management services. University College London mined Oyster Card data in order to minimise costs for travellers using public transport and to encourage public transport use. The first stage of the research will be exploring what can be done and what would be useful to members.

As a new member of the Idox staff, I am on a scheme known as a Knowledge Transfer Partnership (KTP), which helps companies engage in this type of research and development. The scheme is celebrating its 40th anniversary this year, having first been formed in 1975 as the Teaching Company Scheme. The KTP programme is funded by 17 public sector organisations and led by Innovate UK, formerly the Technology Strategy Board. The aim is to support UK businesses wanting to improve their competitiveness, productivity and performance by accessing the knowledge and expertise available within UK universities and colleges.

Traditionally taking place in engineering and manufacturing industries, they have now branched out into ICT, looking at data analysis, and creative industries such as design, fashion, music and video games businesses. There are currently 800 partnerships across the UK.

Our research partnership includes an academic institution, the University of Salford, which is on hand to provide support and guidance. It has an outstanding record with regard to innovation, enterprise and skills. Its Informatics Research Centre builds on the history, success and achievements of research in Computer Science and Information Systems over the last 30 years.

Data mining is a process to discover patterns in large datasets. Its roots are in disciplines such as artificial intelligence, machine learning, statistics and database systems. Its overall goal is to extract information from data and make this understandable, so that it can be used to make decisions. A popular book “Data mining: Practical machine learning tools and techniques with Java” has information about the most common data mining methods.

The three main data mining methods we will be trying are association rules, classification and clustering and we will be exploring these in the research.

  • Association rule learning searches for relationships between variables (or attributes) in the dataset. A popular example is a supermarket finding out which products its customers buy together and using this information for marketing purposes. This is also known as market basket analysis.
  • Classification is when a dataset has examples grouped into known classes; the task is to assign a new example to one of these known classes. A well-known algorithm performing this task is the Decision Tree algorithm C4.5.
  • Clustering performs a similar task to classification, but with clustering we don’t have an assigned ‘class’. A technique known as k-means is a popular method. Other main data mining tasks include regression, summarisation and anomaly detection.
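The market basket idea above can be sketched in a few lines. This is a brute-force toy, not a production algorithm (real tools use methods such as Apriori): for every pair of items across some invented shopping baskets, it computes the rule’s support (how often the pair appears together) and confidence (how often the consequent appears given the antecedent), and keeps rules above the chosen thresholds.

```python
from itertools import combinations

# Invented example baskets, purely for illustration
baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"butter", "milk"},
    {"bread", "butter", "jam"},
]

def pair_rules(baskets, min_support=0.4, min_confidence=0.7):
    """Return rules as (antecedent, consequent, support, confidence) tuples."""
    n = len(baskets)
    items = sorted(set().union(*baskets))
    rules = []
    for a, b in combinations(items, 2):
        for ante, cons in ((a, b), (b, a)):
            together = sum(1 for t in baskets if ante in t and cons in t)
            ante_count = sum(1 for t in baskets if ante in t)
            support = together / n
            confidence = together / ante_count if ante_count else 0.0
            if support >= min_support and confidence >= min_confidence:
                rules.append((ante, cons, support, confidence))
    return rules

rules = pair_rules(baskets)
for ante, cons, sup, conf in rules:
    print(f"{ante} -> {cons} (support {sup:.2f}, confidence {conf:.2f})")
```

On these toy baskets the method surfaces rules such as “bread → butter”, exactly the kind of pattern a supermarket would feed into its marketing.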

Although the research is explorative at the moment, I hope to keep you updated with our progress throughout the project. If you have any thoughts or want to find out more, please get in touch.


The Idox Information Service can give you access to a wealth of further information on data and knowledge management. To find out more on how to become a member, contact us.

Further recent reading*

Classification

Association rule

Measuring transit use variability with smart-card data

Digital councils

*Some resources may only be available to members of the Idox Information Service

How can the government unlock the potential of big data?

By Steven McGinty

Last May, the Open Rights Group announced that they were in discussions with the UK Government over their proposals to remove the barriers to data sharing and link up government databases. This would mean that thousands of government databases, containing information such as criminal records and even energy use, could be accessed by local councils, schools, the civil service and the police. It’s hoped that the sharing of data will allow the government to capitalise on big data techniques and provide better and more tailored public services.

However, several issues have been identified that may make widespread government data sharing challenging. These include:

  • a lack of prioritisation by local council and government leaders;
  • concerns over protecting the privacy of citizens;
  • a mistrust of government data handling;
  • the use of different systems and different standards by government bodies.

The Information Commissioner’s Office (ICO) reports that from April 2013 to March 2014 there were just over 1500 breaches of the Data Protection Act. Local authorities accounted for 234 of these breaches, coming second only to health organisations, who committed 551 breaches. In the last quarter of the year, the most common offences were disclosing personal information in error (175 incidents) and lost or stolen paperwork (74 incidents).

The ICO has also handed out several high profile fines to organisations in the public sector. For example, North East Lincolnshire Council was fined £80,000 for losing an unencrypted USB stick which held the personal and sensitive data of children. Similarly, Aberdeen City Council was fined £100,000 after a member of its staff accidentally uploaded documents onto the internet, including personal information about social care cases.

The Improvement and Development Agency (I&DeA) released a report in 2010 on the role of data sharing in tackling worklessness. The report findings, still relevant today, highlighted the importance of developing data sharing systems that:

  • build the need for data sharing into the design;
  • adopt clear and consistent definitions;
  • respect the privacy of individuals;
  • ensure data integrity.

Further, the report explained how anonymised personal data can be used to share data legally. For example, anonymised data (data which has had its identifiable information removed), has been used increasingly to provide local analysis across a number of areas, including health, crime and employment. Some examples include Eastleigh Ambition, which uses data to target and support vulnerable families, and Newham Council, who use a range of data, including Disability Living Allowance information to improve their understanding of changing populations and needs.
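The anonymisation step described above can be sketched very simply: strip the direct identifiers from a record before sharing, keeping only the fields needed for local analysis. The field names below are illustrative assumptions, not any council’s actual schema.

```python
# Fields treated as direct identifiers in this illustrative example
IDENTIFIERS = {"name", "nhs_number", "address", "date_of_birth"}

def anonymise(record):
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFIERS}

record = {
    "name": "Jane Doe",
    "nhs_number": "123 456 7890",
    "postcode_district": "E15",  # coarse geography kept for analysis
    "benefit_type": "Disability Living Allowance",
}

shared = anonymise(record)
print(sorted(shared))  # ['benefit_type', 'postcode_district']
```

Note that removing direct identifiers is only the first step: combinations of remaining fields can still re-identify individuals, which is why anonymisation in practice also involves aggregation, generalisation and risk assessment.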

Working in partnership and using technological innovations has also provided solutions for data sharing issues. For instance, the Tyne and Wear City Strategy Partnership was established to purchase a shared customer tracking system to facilitate data sharing. The system has been rolled out in a variety of ways across the North East of England, with partners helping to make the system more user friendly. The system has been designed to ensure that consent is built in whenever data is shared. Users also have different levels of access depending on their organisation and on what they ‘need to know’, to ensure compliance with the Data Protection Act.

Although there have been some high profile cases of government data mishandling, it’s clear that data sharing will continue to increase, particularly as all levels of government look for more targeted services. Government and society will have to come to an agreement on how this should be done.



Further reading: