SafeClouds Mid-term Review

 

On 11 April, we had a successful mid-term review for our H2020 project, SafeClouds.eu. The meeting was hosted by Eurocontrol in Brussels, with participants from all entities involved in the project.

Read Eurocontrol's post on the mid-term review here!

Infrastructure needed for Aviation Data Analytics

Author: Jens Krueger

Safety is key in aviation. To maximise safety, stakeholders are collecting large amounts of data for analytics. Ultimately, researchers want not only to evaluate the causal dependencies of safety-critical events, but also to enhance operational efficiency.

Presently, such data is stored in isolated data silos. The goal of SafeClouds.eu is twofold: to advance data-driven analytics for safety and efficiency, and to move data beyond the silos so that it can be shared and merged between different stakeholders, including data owners. However, the infrastructure must ensure that personal or confidential data is not leaked to third parties, all while maintaining data sharing capabilities.
In order to address the requirements for data protection and analysis, the SafeClouds.eu infrastructure must enable the following data analysis paradigms:

  • Fusion of identified confidential data streams into a single de-identified data stream; a minimal sketch of one such de-identification technique follows this list. Identified data is data containing information that could be used, directly or indirectly (e.g. via linking attacks), to expose personal data linked to a specific group of people or to individuals.
  • Access to the de-identified data streams for SafeClouds.eu data analysis.
  • Information sharing of the analysis of restricted and confidential data from aviation stakeholders (airlines, ANSPs) for blind benchmarking.
  • Access governance should be in place, specifying who may access which data (access should be continuously monitored) and under which limitations.
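
One way to fuse confidential streams without exposing identifiers is keyed hashing. The sketch below is illustrative only, not taken from the SafeClouds.eu design; the key, identifier format and function name are invented. Partners applying the same keyed hash to the same flight identifier obtain the same token, so records can still be joined after de-identification, while the token cannot be reversed to the original identifier.

```python
import hashlib
import hmac

# Hypothetical shared key agreed among data owners; in practice, key
# management would itself be governed by the data protection agreements.
SECRET_KEY = b"example-shared-secret"

def deidentify(flight_id: str) -> str:
    """Map a flight identifier to a keyed, non-reversible token.

    Records from different sources carrying the same flight_id map to the
    same token, so de-identified streams can still be fused, while the
    secret key blocks dictionary (linking) attacks on the identifier space.
    """
    return hmac.new(SECRET_KEY, flight_id.encode(), hashlib.sha256).hexdigest()

print(deidentify("IB1234-2018-04-11"))  # same input, same token, no way back
```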

The infrastructure architecture must reflect data protection requirements in order to guarantee the different data confidentiality levels. The physically-independent components are as follows:

Local system:
The local system sits on the premises of the participating companies (e.g. airlines and ANSPs) and stores raw datasets from different source systems. These datasets are enriched with other sources, provided by the global cloud system, to build a 360-degree scenario dataset with enhanced informational context and processing. Finally, the dataset is de-identified and made accessible. Authorised third parties are allowed access only for data management and administrative tasks.

Dedicated private cloud:
Each participating party will be provided with a private segment of the cloud infrastructure that is logically and physically independent. It is used for de-identified data storage and analytics. Data scientists from SafeClouds.eu official partners will have access to the de-identified data under the data protection agreements.

Global cloud system:
The global cloud system is divided into two parts. The global storage will hold all open datasets (Meteo, ADS-B, SWIM, Radar). It will also ensure dataset quality and accessibility through pre-processing. In addition, it will grant access from the local systems and the dedicated private cloud. Note that the global processing infrastructure performs analytics on joint datasets from all dedicated private clouds. 

Figure 1: Hierarchical architecture of the SafeClouds.eu infrastructure

The SafeClouds.eu Cloud Infrastructure

The SafeClouds.eu cloud infrastructure is built on Amazon Web Services (AWS). One of the main advantages of AWS is that it consists of several datacenters located around the world. This enables SafeClouds.eu to reduce communication latencies by choosing the most appropriate datacenter locations. Each AWS datacenter is located within a region, and each region contains several datacenters, or Availability Zones. Each Availability Zone is attached to a different part of the power grid, to mitigate the damage of a potential power outage. Any distributed cloud application running in AWS must weigh the fault tolerance gained by placing nodes in different Availability Zones against keeping computational resources as close together as possible to enhance performance.
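
As an illustration of this tradeoff, the minimal boto3 sketch below spreads two analytics nodes across two Availability Zones in a single region. The region, zone names and image id are placeholders, not the actual SafeClouds.eu deployment.

```python
import boto3

# Client for the region closest to the project's stakeholders
# (region and zone names below are placeholder examples).
ec2 = boto3.client("ec2", region_name="eu-west-1")

# Spread two analytics nodes across Availability Zones: losing one
# zone's power grid then takes down only half of the cluster, at the
# cost of slightly higher inter-node latency than a single-zone layout.
for zone in ("eu-west-1a", "eu-west-1b"):
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder image id
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},
    )
```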

For SafeClouds.eu, AWS enables the infrastructure to horizontally scale with an increasing number of stakeholders or increased processing or storage requirements.

To ensure security, AWS Identity and Access Management (IAM) is used, together with virtual private clouds (VPCs) and encryption for data in motion and at rest.
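
As a small illustration of what encryption at rest looks like in practice, the sketch below asks S3 to encrypt an uploaded object server-side; the bucket and object names are invented, and this is only one of several encryption options AWS offers.

```python
import boto3

s3 = boto3.client("s3")  # boto3 talks to S3 over HTTPS, covering data in motion

# Encryption at rest: ask S3 to encrypt the object server-side (AES-256).
# Bucket and key names are made up for this example.
s3.put_object(
    Bucket="safeclouds-example-bucket",
    Key="deidentified/sample-flight.bin",
    Body=b"...",
    ServerSideEncryption="AES256",
)
```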

Remarks

The SafeClouds.eu infrastructure enables data protection, data sharing and flexibility. Data safety and security are key to gaining trust from data providers; without them, the success of the overall project is at risk. This blog post stresses the importance of a distributed and secure infrastructure and gives a first look into how the overall infrastructure architecture is designed. However, although the base infrastructure technology supports scalability, security, and other factors, the most important challenge is to leverage and implement those technological capabilities. Among the main security threats are human error, bugs, and faulty implementations. To account for user error, the infrastructure must be as automated as possible, with clearly defined and deterministic processes. In addition, each entry point must be defined and encapsulated while preserving accessibility and usability. SafeClouds.eu will be using this precise infrastructure for aviation data analytics, and will share its findings with the aviation and data science communities.

Discovering hidden knowledge in aviation data

Author: Paula Lopez (INX)

Machine learning is producing outstanding results, although we know it is still far from emulating human intelligence. Applying machine learning techniques, including multi-level artificial neural networks (deep learning), to speech or image recognition, for example, has continuously improved results (e.g. digital assistants like Apple's Siri or Amazon's Echo). In spite of the significant progress achieved so far, some challenges still need to be resolved before these techniques can be applied in most industries. On the one hand, we face a fragmented ecosystem: there is a gap between the data scientists and the domain experts working in each particular sector. To convert data into knowledge, collaboration between both kinds of expertise is required. On the other hand, challenges related to data management and data analysis need to be addressed before machine learning techniques can be implemented in most industries. These challenges, to name just a few, include heterogeneous and distributed data sources, data validation, distributed data architectures, data security, scalability, real-time analysis and decision support, and data visualization.

However, we cannot fall into the error of assuming that a machine learning problem can be addressed through a generic, standard application of a set of algorithms and techniques. Machine learning problems are highly case-dependent and, therefore, the purpose of the analysis needs to be carefully defined in advance. This is what we (at Innaxis) call Purposeful Knowledge Discovery, which was also the title of the keynote speech given by Innaxis President Carlos Alvarez Pereira at the SESAR Innovation Days 2017 in Belgrade. And this is precisely the approach we follow at Innaxis in our data science research projects, like SafeClouds.eu: an H2020 project aimed at enhancing aviation safety through the application of data science techniques.

SafeClouds.eu includes a team of 16 partners, with data scientists and engineers from several research entities (Innaxis, Tadorea, Fraunhofer, TU Munich, Linköping University, TU Delft and CRIDA) and a group of airlines, ANSPs and safety authorities (Iberia, Air Europa, Vueling, Norwegian, Pegasus, LFV, Eurocontrol, AESA and EASA). This group of airspace stakeholders is the user group of the project; in other words, they define the questions they need data to answer. These questions can be of three types: descriptive (what happened?), predictive (what will happen?) or prescriptive (what should we do to make what we want happen?). Once the questions are defined (the SafeClouds.eu use cases), the team of data scientists and engineers works together and collaborates with users across the full cycle of data science techniques: data management, data processing architecture, deep analytics, data protection, pseudo-anonymization, advanced visualization and user experience. As previously mentioned, every step has its own challenges, as there are no standard data science tools that can be transferred automatically from one field to another. Below, we outline just two challenges: fusion of proprietary confidential data, and benchmarking among competing stakeholders.

  • Smart Data Fusion: Simply erasing the flight-identifier parameters would protect the data but would not allow fusion of datasets. Much of the data requires protection and cannot be shared (e.g. FDM data and radar tracks), so fusion needs sophisticated techniques from cryptography that encode sensitive data in a non-reversible way.
  • Secure Blind Benchmarking: Benchmarking among stakeholders based on data that cannot be shared also requires specific techniques. This includes secure multiparty computation, which enables comparison between confidential datasets without disclosing the data to anyone, not even to a trusted third party (see the sketch after this list).
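
To make the blind benchmarking idea concrete, here is a toy additive secret-sharing sketch, one of the simplest secure multiparty computation constructions. It is not the SafeClouds.eu protocol; the party names and KPI values are invented. Each party splits its confidential value into random shares, and only sums of shares are ever published, yet the industry average can be recovered exactly.

```python
import random

P = 2**61 - 1  # public prime modulus, comfortably larger than any KPI value

def make_shares(secret: int, n: int) -> list[int]:
    """Split a value into n additive shares that sum to the secret mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

# Three airlines each hold a confidential KPI (values are invented).
secrets = {"airline_a": 42, "airline_b": 57, "airline_c": 35}
n = len(secrets)

# Each party splits its value and sends one share to every participant...
all_shares = {name: make_shares(value, n) for name, value in secrets.items()}

# ...and each participant publishes only the sum of the shares it received.
published = [sum(shares[i] for shares in all_shares.values()) % P for i in range(n)]

# Anyone can recover the total (and hence the average) from the published
# sums, but no individual KPI is ever disclosed to anyone.
total = sum(published) % P
print(total / n)  # ~44.67, the blind benchmark
```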

These are just some examples of the challenges the SafeClouds.eu team is facing in the field of aviation safety data analysis. The solutions offered by these techniques make them ideal to be applied to other fields such as fuel consumption but, again, the purpose of the analysis will determine the following necessary steps.

 

Mobility metrics and indicators rethought


Performance is about comparing some output of a system with some level of expectations. Setting the right level of expectations is certainly a major issue by itself, but choosing the right metrics to measure is probably even more difficult.

This difficulty comes from the fact that Key Performance Areas (KPAs) live in a different world than Key Performance Indicators (KPIs). KPAs live in a qualitative world, where general ideas are thought to be important for human beings, for instance ‘safety’. KPIs, on the other hand, belong to a quantitative world of ‘cold values’, floats and integers, observed in the real world. Matching these two worlds is like getting into Mordor: first you think it will be obvious, then you think it will be impossible, and you finally pick a way because it is pretty much the only one available.

Indeed, the potential KPIs one could imagine are fortunately severely constrained by reality and by what we can observe in the system. For instance, in DATASET2050 we were trying to define an indicator for the ‘seamlessness’ of a trip, something which is without a doubt important for all travellers. Important, ok, but what is it exactly?

Seamlessness is about the perception of travellers. As a consequence, it is highly subjective, and subjectivity by definition cannot be part of an indicator, because an indicator is meant to be objective. So instead of a top-down approach asking ‘What would be the best metrics to measure in order to represent seamlessness?’, we are left with a bottom-up approach asking ‘Among the metrics I can measure, which ones would be related somehow to seamlessness?’.

So, what can we measure? For many years now, sociologists and psychologists have used ‘cognitive load’ as a measure of the effort a brain needs to accomplish a given task. Seamlessness is about being able to forget the trip itself and not being actively forced to take decisions or look for information to continue the journey. We thus defined a first indicator, the total cognitive load of a given trip for the passenger, as a measure of seamlessness. Ok, but how do you measure cognitive load in reality?

Well, you don’t, at least not on a large scale. And here comes the second step of the search for a good indicator: can we find something easily measurable that approximates what would be a perfect indicator?

In the case of seamlessness, we have to go back to how the travel unfolds. For instance, what is the difference between:

1) depart from home, take a taxi, take a train, take a taxi, arrive at destination;

vs:

2) depart from home, take a taxi, take a train, take another train, take a taxi, arrive at destination.

Easy: there is one more train. Ok, but what makes you choose the first option over the second if both have the same travel time, price, etc.? Well, the first is easier, right? You do not have to think about getting off the train, finding the next one, waiting, getting in, possibly struggling to find a seat, etc. So the idea that the first one is easier than the second comes ultimately from the ‘continuation’ property of the actions you are taking, which is associated with a low cognitive load dedicated to the journey. In other words, taking different actions during a trip is more annoying than taking only one action.

Following this idea, DATASET2050 defined the journey as a series of ‘phases’ and ‘transitions’. ‘Phases’ are typically long with a low cognitive load dedicated to the journey, whereas ‘transitions’ are short and require the active participation of the passenger in order to continue the journey. A simple indicator can then be defined as the number of transitions taken in a single journey, which is trivial to compute for nearly any journey, with very little data input.

A slightly more advanced indicator considers the time spent within transitions, for instance queuing times, compared to the total travel time. For instance, a short 45-minute trip where one has to take three buses is quite tiring compared to a single-bus journey. This indicator requires more data, as the specific times spent in each of the segments are required. However, it is largely feasible to compute with modern methods of data collection (e.g. GPS tracking). Offering a good balance between measurability and conceptual proximity to the initial KPA, this indicator is the one that was selected as the key performance indicator for seamlessness in DATASET2050.
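
Both indicators are easy to express once a journey is written down as a sequence of phases and transitions. The toy sketch below (journey and durations invented, not DATASET2050 data) computes the transition count and the share of travel time spent in transitions.

```python
# A journey as an alternating sequence of phases and transitions,
# each with a duration in minutes (numbers are invented).
journey = [
    ("phase", "at home", 0),
    ("transition", "walk out, board taxi", 5),
    ("phase", "taxi ride", 30),
    ("transition", "alight, find platform, board train", 15),
    ("phase", "train ride", 90),
    ("transition", "alight, hail taxi", 10),
    ("phase", "taxi ride", 25),
]

# Simple indicator: the number of transitions in the journey.
n_transitions = sum(1 for kind, _, _ in journey if kind == "transition")

# More advanced indicator: share of total travel time spent in transitions.
total_time = sum(minutes for _, _, minutes in journey)
transition_time = sum(m for kind, _, m in journey if kind == "transition")

print(n_transitions)                  # 3
print(transition_time / total_time)   # ~0.17
```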

In DATASET2050, we have gone through the exercise of finding the right indicator for all of the KPAs defined by ICAO, including safety, flexibility, efficiency, etc. These concepts are sometimes too vast and need to be broken down into sub-KPAs, called “Mobility Focus Areas”. For all of them, several indicators have been defined, but only one final KPI per KPA was selected in the end. For instance, the KPA “flexibility” has been subdivided into “diversity of destinations”, “multimodality”, and “resilience”; only one key indicator was selected in the end, weighting the travel options by the distance between the potential destinations. All this work can be found in the public deliverable 5.1 of DATASET2050, soon available here.

To conclude, the choice of a good indicator is dictated by the balance between the measurability of the metric and its relationship with the overall concept. This is an important issue, as indicators are then used by policy makers to drive the system in a certain direction. And the quality of the indicator decides whether that direction is the right one.

Author: Gérald Gurtner (University of Westminster) as part of DATASET2050 post series

Workshop: Digital for Sustainability – In Need of a Disruptive Research Agenda

"Digital Transformation" is the buzz phrase of the day. Since the 1980s an explosive growth has happened in Information and Communication Technologies (ICT), and its become pervasive, bringing a perception of tremendous acceleration in technological innovation. There are also high expectations for the role of ICT in sustainable development. Concepts such as disruption, dematerialization and zero marginal costs contribute to the (up to now) false belief that becoming increasingly digital will lead to low resource consumption. However, research shows that the ICT sector itself is not environmentally friendly; it is the fastest growing contributor to emissions, it consumes large amounts of energy, water and critical resources, and produces equally vast amounts of harmful waste with minimal recycling.

To address the generic claim of ICT as contributing to a better and “green” world, there should be mutual recognition and cooperation between digital tech and sustainable development, especially to understand the significant effort needed to harness the power of ICT for human advancement. Digital technologies and sustainability have rarely been analysed together in a rigorous manner. The scientific literature on the nexus of these topics is, up to now, worryingly thin, and in many aspects it does not yet address the right questions, much less the responses.

This demands a rigorous inquiry into the issues at stake, and the foundation of a research agenda that builds strong synergies and acts beyond the current hyped assumptions.

Considering this, Innaxis would like to invite you to the “Digital for Sustainability – In Need of a Disruptive Research Agenda” workshop. This event will be organised during the World Resources Forum on Tuesday 24th October 2017 in Geneva.

The goal of this workshop is to ignite a community of interested parties, who work on interdisciplinary research and action agendas, and to enable the alignment of digital technologies with the goals of sustainable development.

Speakers: Carlos Alvarez Pereira, Ladeja Godina Košir 
Workshop Organisers: Innaxis Research Institute and Texelia AG
Workshop Co-Organiser: Circular Change
Workshop Chairs: Soumaya El Kadiri (Texelia AG) and Joséphine von Mitschke-Collande (Innaxis Research Institute)

Date and time: Tuesday 24th October 2017, 16h30 – 18h30

Venue:
Centre International de Conferences (CICG)
Rue de Varembé 17
1211 Genève - Switzerland

 

Vista tactical model – Mercury: because passengers matter

Over the next decades, EU mobility is expected to evolve progressively from the gate-to-gate focus currently prevalent in the aviation and ATM industry towards a seamless and efficient door-to-door-oriented vision. The paradigm shift from gate-to-gate (hence aircraft-centred) to door-to-door (passenger-oriented) is present in virtually all strategic research documents and agendas. The paradigm shift is here to stay. From a passenger perspective, which of the following scenarios creates more impact?

  • Scenario A: an 8-minute delay in an aircraft's arrival time, with no connecting passengers
  • Scenario B: a 5-minute delay that prevents a significant number of passengers from making a connection at that airport, subsequently extending their door-to-door trips by more than 10 hours

How can that impact be predicted in terms of time and cost? One of the very first research exercises in this direction was the POEM project (SESAR 1 WP-E), which was the original seed of Mercury. Mercury has since been improved, validated and completed in other research initiatives for SESAR and the European Commission, reaching its current door-to-door status.

What is Mercury?

Mercury is a modelling and simulation tool: a framework capable of measuring the performance of the air transport network. It provides a wide range of performance and mobility metrics, capable of describing different air transport scenarios in detail.

Mercury draws on extensive data from a wide range of industry sources, including airlines, airports and air navigation service providers. Mercury's data models have been demonstrated through over 5 years of research and development, plus industry consultation.

How do passengers matter in Mercury?

Mercury is the first air transportation network simulator that puts passengers at the centre. In each simulated day, the itineraries of more than 3 million passengers are reproduced. Each passenger has an individual profile, a ticket and decisions to make. Under EU Regulation 261/2004, passengers are compensated for delays and cancellations. Extended delays, aborted journeys and overnight stays are all part of the Mercury simulator.
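
As an illustration of how such compensation can enter a cost model, here is a hedged sketch of the core distance/delay table of EU Regulation 261/2004. The real regulation has further conditions (extraordinary circumstances, halved amounts for modest rerouting delays), so this is a starting point rather than Mercury's actual implementation.

```python
def eu261_compensation(distance_km: float, delay_min: float) -> int:
    """Core EU 261/2004 compensation tiers, in euros (simplified sketch)."""
    if delay_min < 180:            # compensation starts at 3 hours' delay
        return 0
    if distance_km <= 1500:
        return 250
    if distance_km <= 3500:
        return 400
    return 600

print(eu261_compensation(2000, 200))  # 400 euros for this passenger
```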

Of course, airlines play a major role as well: Mercury incorporates cost models for canonical airline categories. Each airline decision, whether to wait for certain passengers, to cancel a flight, or even to board the passengers and send a ready message when an ATFCM slot has been assigned, is taken according to each airline's rational cost model.

The secret ingredient: a spice of randomness

There is no way one could develop a simulator like Mercury taking into account every detail of the air transportation system. Some processes are just too complex or, simply put, not yet understood, whilst others are exogenous factors far beyond the reach of the air transportation system.

But what if we could use a different approach? In Mercury, each day of operations is repeated many times, introducing small variations that represent everyday uncertainty and exogenous factors.

Ultimately, small changes lead to completely different days of operations, delays and cancellations. Much as in chaotic systems, the sensitivity to initial conditions allows us to explore overall trends and stable states, in some cases called emergence.
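
The approach is essentially Monte Carlo simulation. The toy sketch below (all distributions invented, far simpler than Mercury) repeats the same day of operations under different random draws and studies the distribution of outcomes rather than any single realisation.

```python
import random
import statistics

def simulate_day(seed: int) -> float:
    """Toy stand-in for one simulated day: each of 1000 flights gets a
    delay drawn from a distribution representing everyday uncertainty."""
    rng = random.Random(seed)
    delays = [max(0.0, rng.gauss(5.0, 15.0)) for _ in range(1000)]
    return statistics.mean(delays)

# Repeat the same day 100 times with different random draws.
outcomes = [simulate_day(seed) for seed in range(100)]
print(statistics.mean(outcomes), statistics.stdev(outcomes))
```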

Interested in reading further info about Mercury? Click here to visit the website.

Author: Samuel Cristóbal (Innaxis)

Entry level/Junior Data Scientist or Data Engineer

Innaxis is currently seeking Data Scientists and Engineers to join its research and development team based in Madrid, Spain. We are looking for talented and highly motivated individuals who want to pursue and lead a career outside of the more mainstream, conventional alternatives. Individuals with a great dose of imagination, problem-solving skills, flexibility and passion are encouraged to apply.

  • As a Data Engineer, you will help the team design and integrate complete solutions for Big Data architectures, from data acquisition and ETL processes through to storage and delivery for analysis, using the latest technologies and solutions for maximum performance.
  • As a Data Scientist, you will mainly assist the team in understanding, analysing and mining data, but also in preparing it and assessing its quality. You will also develop methods for data fusion and anonymization. Ultimately, your goal will be to extract the best knowledge and insights from data, despite technical limitations and while complying with regulatory requirements.

About Innaxis

If not unique, Innaxis is at the very least unconventional: it is a private, independent, non-profit research institute focused on Data Science and its applications, most notably in aviation, air traffic management and mobility, among other areas.

As an independent entity, Innaxis determines its own research agenda and now has a decade of experience in European research programmes, with more than 30 successfully executed projects. New projects and initiatives are evaluated continuously, and the institute is open to new opportunities and ideas proposed within the team.

The Innaxis team consists of a very interdisciplinary group of scientists, developers, engineers and programme managers, together with an extensive network of external partners and collaborators, from private companies to universities, public entities and other research institutes.

Skills wanted

Our team works very closely together on a daily basis, so broader knowledge means much better coordination. Therefore, there is a single list of skills ideally wanted for both positions. These skills are then weighted as requirements or “bonus points” according to the candidate's position of interest, i.e. Data Scientist or Data Engineer.

  • University degree, MSc or PhD in Data Science, Computer Science or a related field, provided all other requirements are met.
  • No professional experience is required, although any will be positively evaluated.
  • Proficiency in a variety of programming languages, for instance Python, Scala, Java, R or C++, and up-to-date knowledge of the newest software libraries and APIs, e.g. TensorFlow or Theano.
  • Experience with acquisition, preparation, storage and delivery of data,  including concepts ranging from ETL to Data Lakes.
  • Knowledge of the most commonly used software stacks such as LAMP, LAPP, LEAP, OpenStack, SMACK or similar.
  • Familiar with some of the IaaS, PaaS and SaaS platforms currently available such as Amazon Web Services, Microsoft Azure, Google Cloud and similar.
  • Understanding of the most popular knowledge discovery and data mining problems and algorithms: predictive analytics, classification, map-reduce, deep learning, random forests, support vector machines and the like.
  • Continuous interest in the latest technologies and developments, e.g. blockchain or Terraform.
  • Excellent English communication skills; English is the working language at Innaxis.
  • And of course, great doses of imagination, problem solving skills, flexibility and passion.

Benefits

The successful candidate will be offered a position at Innaxis as a Data Scientist or Data Engineer, including a unique set of benefits:

  • Being part of a young, dynamic, highly qualified, collaborative and heterogeneous international team.
  • Great flexibility in many aspects, including working hours, compatibilities and location, and excellent working conditions.
  • A horizontal hierarchy: all researchers’ opinions matter.
  • A long-term, stable position; Innaxis has been growing steadily since its foundation ten years ago.
  • A fair salary, in line with the nature of the institute, adjusted to skills, experience and education, and subject to continuous review.
  • Independence: given the non-profit, research-focused nature of Innaxis, the institute is driven by different forces than the private sector, free of commercial and profit interests.
  • The possibility to develop a unique career outside of the mainstream of academia, private companies and consulting.
  • No outsourcing whatsoever, all tasks will be performed at Innaxis offices.
  • An agile working methodology; Innaxis recently adopted JIRA/Scrum, and all research is done on a collaborative wiki (Confluence).

Apply

Interested candidates should send their CV, a research interest letter (around 400 words) and any other relevant information supporting their application to recruitment@innaxis.org. You will then be contacted and a personal selection process will begin.

 

FDM Raw Data: Why Binary Data and How to Decode It?


Authors: Lukas Höhndorf & Javensius Sembiring (TU Munich)

SafeClouds.eu gathers 16 partners for research collaboration with a wide and diverse group of users, including air navigation service providers, airlines and safety agencies. SafeClouds.eu encourages active involvement from users, as the project aims to apply data science techniques to improve aviation safety. SafeClouds.eu is unique in that it involves data combination and collaboration between ANSPs, airlines and authorities in order to improve our knowledge of safety risks, all while maintaining the confidentiality of the data. This safety analysis requires a comprehensive understanding of the various data sources, and supports the use case analysis as selected by the users.
The basics of FDM data, one of the main data sources for the project, are outlined in this post.

Onboard Recording

A large amount of data is recorded during civil aircraft flights. Apart from the “Flight Data Recorder” (widely known as the “Black Box”) that is mainly used for accident investigations, there are also recorders for regular operations. These recorders are often called “Quick Access Recorders” (QAR). QAR data is analysed in terms of safety, efficiency and other aspects in airlines’ Flight Data Monitoring activities, and is furthermore an integral part of the SafeClouds.eu research project.

Figure 1: Example for a QAR (Source: https://www.safran-electronics-defense.com/aerospace/commercial-aircraft/information-system/aircraft-condition-monitoring-system-acms)

Aircraft are very complex systems with a large number of sensors constantly recording measurements. Important parameters regarding the aircraft state, including position, altitude, speed, engine characteristics and many others are recorded by the QAR. Depending on the aircraft type and airline, the number of recorded parameters can reach several thousand.

As a digital device, the QAR records in binary format. In other words, if we looked at the QAR data we would only see a bit stream, i.e. a sequence of 0s and 1s. In order to use the data and investigate, for example, the aircraft position, two additional components are necessary. First, logic is needed to determine how the data is written into the bit stream. This is given by an ARINC standard, and two versions are presently in use: the ARINC 717 standard for older aircraft types and ARINC 767 for newer aircraft types. Second, a detailed description of the location of each considered parameter in the bit stream is needed. This is given by a “dataframe”, a text document of up to several hundred pages.


Figure 2: Overview (Source: “Flight Data Decoding used for Generating En-Route Information based on Binary Quick Access Recorder Data”, Master thesis, Nils Mohr, Technical University of Munich)

File Formats

One of the advantages of storing data in binary format is storage efficiency. The same flight data file stored in binary format might be ten times smaller than when stored as engineering values (e.g. in a CSV file). Considering the SafeClouds.eu research project, or shared flight data frameworks such as the FAA's ASIAS, IATA's FDX or EASA's Data4Safety, which collect data from millions of flights, efficient storage is clearly needed.
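
The size difference is easy to reproduce. The sketch below (synthetic values, not real QAR data) packs the same samples once as 4-byte binary floats and once as CSV text; the text version comes out several times larger.

```python
import csv
import io
import struct

# 10,000 synthetic samples of a recorded parameter.
values = [36.5 + 0.01 * i for i in range(10_000)]

# Binary: one 4-byte float per sample.
binary = struct.pack(f"<{len(values)}f", *values)

# Engineering values: the same samples written as CSV text.
buf = io.StringIO()
csv.writer(buf).writerows([v] for v in values)
text = buf.getvalue().encode()

print(len(binary), len(text))  # the CSV version is several times larger
```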

However, storing flight data in binary format requires an efficient way to transform the binary data into engineering values. Two parts are necessary. First, the bit stream logic (provided by the ARINC standard) needs to be represented in a decoding algorithm. Second, the dataframe information, i.e. which parameter can be found in which part of the bit stream, needs to be accessible to the decoding algorithm.

Decoding

Recorded parameters have different characteristics. For example, they can be numeric, alphanumeric or characters. Depending on these characteristics, different decoding rules have to be applied. As an example, a temperature recording of 36.5 °C with a linear conversion rule is considered in the following figure.


Figure 3: Simple Decoding Example (Source: “Flight Data Decoding used for Generating En-Route Information based on Binary Quick Access Recorder Data”, Master thesis, Nils Mohr, Technical University of Munich)

Starting from the bit stream, only specific binary values are relevant for the temperature recording. As mentioned above, this information can be found in the dataframe. The combination of these bits yields a number in the binary system, which can then be converted into the associated decimal value. Applying the conversion rule for linear parameters gives the result 36.5. Information about these rules, as well as the unit, in this case degrees Celsius, can be found in the dataframe.
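
A minimal decoding sketch, assuming an invented dataframe entry (the real ARINC 717/767 frame layouts are considerably more involved): extract the parameter's bits from the stream, interpret them as an unsigned integer, and apply the linear conversion rule.

```python
def extract_bits(stream: bytes, bit_offset: int, bit_length: int) -> int:
    """Read an unsigned integer of bit_length bits starting at bit_offset,
    most significant bit first."""
    value = 0
    for i in range(bit_length):
        byte_index, bit_index = divmod(bit_offset + i, 8)
        bit = (stream[byte_index] >> (7 - bit_index)) & 1
        value = (value << 1) | bit
    return value

# Invented dataframe entry: the temperature occupies 10 bits starting at
# bit 4 and uses a linear conversion with scale 0.1 and offset 0.
stream = bytes([0b00000101, 0b10110100])  # contains 0b0101101101 = 365
raw = extract_bits(stream, bit_offset=4, bit_length=10)
temperature = raw * 0.1 + 0.0
print(temperature)  # 36.5 (°C)
```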

Summary

The data recorded by civil aircraft in their daily operations contains valuable information that can be used for airline safety analyses. Due to the nature of the recording, the data is generated in binary format. To make the data accessible and readable for analysts, a decoding algorithm is applied. For the development of this algorithm, information about the recording logic and about all the considered parameters must be available.

Author: Lukas Höhndorf (TU Munich)

VISTA: priorities and building a credible model

Setting priorities and building a credible model

In Vista, capturing the level of development of the ATM system at the 2035 and 2050 horizons is critical, and we need to ensure that the most relevant scenarios for stakeholders are prioritised during the project. A consultation with relevant expert stakeholders has been conducted to help us with these tasks. The consultation focused on obtaining the experts' views on key aspects of the project, namely:

  • identification of potentially missing metrics for the different stakeholders;
  • prioritisation of the metrics generated by the model;
  • identification of potentially missing factors and the possible values considered for them;
  • ranking of foreground factors (see previous blog) by relevance;
  • ensuring that none of the factors identified as background factors should instead be considered foreground;
  • prioritisation of background scenarios and identification of the level of maturity of the system for 2035 and 2050; and, finally,
  • understanding which results produced by Vista would be of particular interest to experts and stakeholders.

The consultation questionnaire comprised twelve detailed questions and was targeted at high-profile experts in the ATM field.

The results of this activity allowed us to prioritise the metrics and scenarios that will be modelled, and ensured that we had not missed any relevant source for regulations or the technical evolution of the system. A second consultation is planned to review the first results obtained with the model. With these consultations, Vista maximises its impact on the community, addressing the topics that are relevant to stakeholders and validating the results obtained.

Another strength of Vista is the inclusion of key stakeholders, not just as a consultation body, but as core partners in the project. Vista benefits from such partnerships with airlines (SWISS, Norwegian and Icelandair), a FABEC ANSP (Belgocontrol) and airport experts (EUROCONTROL). Dedicated site visits have been carried out in Reykjavik, Oslo and Zurich to further understand the airlines' business models, needs and projected system evolution. These visits also allowed the Vista modelling team to have first-hand access to the strategic, pre-tactical and tactical management of airlines' operations. This access ensures that the model captures the impact of the different factors as closely as possible to reality. Moreover, the airlines' involvement in the project provides crucial data and validation of preliminary results. Similarly, planned meetings in Brussels and London with Belgocontrol and EUROCONTROL will ensure that the visions of ANSPs and airports are properly considered in the model.

 

Moving the people to the terminal? Why not move the terminal to the people?


The question of ground access to airports is the object of many studies. How do we get people to the airport quickly, efficiently and sustainably? A previous blog post touched on the many different means people use to accomplish this part of their journey.

One of the major options that is often pursued is the creation of a train line joining the airport to its host city. However, while this city is often the most frequent origin/destination of travellers using the airport, it by no means accounts for the majority of surface access/egress journeys.

For example, fewer than 54% of access/egress journeys to/from London Heathrow (LHR) come from the whole of Greater London, much less from the central part of London that is served by the Underground and the Heathrow Express train. In fact, the Underground and Express trains between them account for only 13% of terminating passengers which, while certainly not negligible, leaves seven out of every eight passengers taking a different mode of transport; 61% of passengers to LHR use private transport. After all, who wants to join all of the congestion travelling into the centre of a big city from the suburbs (or beyond) where they live, just to get the train out to the airport?

So what’s the solution? Heathrow Airport Ltd (HAL) is pursuing a plan, as part of its “Heathrow 2.0” initiative, to ensure that the 100 largest cities in the UK are linked to LHR by train with no more than one connection. More rail access will be available when the “Crossrail” line that will run between Brentwood and Reading through London has been completed.

We can also make it easier to park, reducing the time needed to do so and the walking time needed to catch the shuttle to the airport. Stanley Robotics, a French startup, is proposing a robotic valet that will pick your car up at the entrance to the car park and park it for you, fetching it again when you return.

Enter the toast-rack.

Airports like LHR are moving to the “toast-rack” layout. With the creation of Terminal 5 (T5), satellite terminals (5B and 5C) were placed between, and perpendicular to, the runways, fed from the main terminal (5A) at the end. An air-side underground train takes passengers from check-in and security in 5A to the satellites.

 Annotated LHR

The new “Queen’s Terminal” will eventually have the same design.

But when T5 was built both the Underground Piccadilly line and the Heathrow Express were extended – parallel to the air-side underground train serving the satellites – to bring passengers land-side to 5A. Move them west to move them back east – not very efficient!

Why not move the terminal, instead of the people?

Isn’t it time we started re-thinking how we design our airports? Is there any reason why the terminal needs to be where the runways are? Previously, it has been possible to check your bags in at a railway terminal before boarding your train/bus to the airport (at London Victoria for Gatwick, for example), but this is not really moving the terminal to the people.

Crossrail

With Crossrail, a new underground branch line is being built from the London-Reading line (access from London only) to LHR, bringing more land-side passengers (but only some of them – many will still come by car) – but inconveniently not serving T5. Now, if Terminal 5’s check-in, baggage claim, security, etc. had been constructed on the London-Reading line, the same investment could have paid for an air-side line that would link in with the T5 air-side line and carry every passenger to 5A, 5B, 5C and beyond.

Imagine a terminal only a few kilometres from the runways, where the train line and the motorway access already exist. For LHR, this could have been at Iver, an area served directly by the existing train line (and the same two motorways as LHR) but with enough room for all of the airport’s needs, with space for a complete airport business, hotel and shopping city without anything being bounded by runways, taxiways, gates, service areas, etc. The car parks could have been right next to the terminal instead of, as is the case with T5, being so far away that the airport has had to spend £30m to provide personal transport “Pods” to transfer people to and from the terminal.
Annotated Iver

This air-side line could even be extended to a second or third terminal, closer to other access points to the city. In the case of a multi-airport city like London, one could even envisage an air-side railway linking all of the air-sides (Gatwick, Heathrow, Luton and Stansted), and their displaced terminals, enabling passengers to use the terminal nearest to them and to fly from the runway of their airline’s choice.

Complete separation

Having only air-side activities where the runways are, and leaving the land-side activities where the people are, is a much cleaner solution that reduces airport access time and provides more space for land-side activities. And once the concept of separating the land-side from the air-side is accepted, the air-side and runways can be located somewhere where fewer people will be annoyed by the noise. If staff and passengers use a land-side terminal miles from the air-side, the pressure to live near the airport for easy access would move away from the runways, causing less encroachment into noise-impacted areas.

For Heathrow, it’s too late to think of implementing such a system now; the investment in Heathrow Express, the Crossrail link, T5, the Queen’s Terminal, the Pods, etc. has already been made.

But when the next new airport is built, perhaps it would be worthwhile to think that, instead of annoying both travellers and residents by building terminals and runways together, it would be much better to put the terminals close to the people and the runways far away from them.

Author: Pete Hullah (EUROCONTROL) as part of DATASET2050 project
