The Study Group Mathematics with Industry (SWI) is a combined industrial–academic workshop where mathematics is used to tackle problems presented by companies and other organisations outside academia. Roughly fifty to eighty mathematicians, from both industry and academia, gather for the week to collaborate intensively on industrial problems.

The format follows the original Oxford model, dating back to 1968, which is used worldwide in similar study groups. Companies present a selection of problems on the Monday. The participants devote the entire week to solving these problems in smaller groups. Each group presents their work on the Friday.

Instances of the SWI have been held almost annually in the Netherlands since 1998.

The sampling density function in Compressed Sensing can best be optimized once the k-space density is known for the object under investigation. This is characterized by a fast, pre-acquired MRI data set, from which the k-space extent is estimated by a Fourier transform. In addition, the data are acquired with multiple, spatially-localized coil elements, which impose a weighting function on k-space.

The optimal k-space trajectory for the required sampling density in Compressed Sensing in multi-coil acquisitions must also be constrained to match available gradient-system capabilities: gradient amplitude, slew rate, and bandwidth (e.g. 6 kHz). The target is the development of a demonstrator algorithm to derive the most efficient object-specific k-space trajectory for Compressed Sensing. Further k-space weighting may be imposed by relaxation (T2/T2*) effects. Such additional weighting could potentially be taken into account after development of the basic algorithm.

We are interested in two flavours:

samples constrained on a Cartesian grid,

free traversal of 3D k-space.

The envisaged algorithm should work in 'real time' with our PDF, i.e. complete within, say, 15 ms on state-of-the-art Intel hardware.
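For the Cartesian-grid flavour, a sampling density can be turned into an undersampling mask by drawing k-space locations with probability proportional to the estimated spectral energy. The sketch below is a minimal Python illustration under our own assumptions (the function name and the toy 1/(1+|k|²) spectrum are not Philips' algorithm), and it ignores the gradient-system and timing constraints entirely:

```python
import numpy as np

def variable_density_mask(kspace_energy, n_samples, rng=None):
    """Draw a Cartesian undersampling mask whose sampling density follows
    an estimated k-space energy map (e.g. |FFT| of a fast pre-scan)."""
    rng = np.random.default_rng() if rng is None else rng
    p = kspace_energy.astype(float).ravel()
    p /= p.sum()                         # probability per k-space location
    idx = rng.choice(p.size, size=n_samples, replace=False, p=p)
    mask = np.zeros(p.size, dtype=bool)
    mask[idx] = True
    return mask.reshape(kspace_energy.shape)

# toy spectrum with energy concentrated at the k-space centre
ny, nx = 64, 64
ky, kx = np.meshgrid(np.arange(ny) - ny // 2, np.arange(nx) - nx // 2,
                     indexing="ij")
energy = 1.0 / (1.0 + kx**2 + ky**2)
mask = variable_density_mask(energy, n_samples=512,
                             rng=np.random.default_rng(0))
```

The resulting mask samples the low-frequency centre far more densely than the periphery, which is the qualitative behaviour required; deriving an actual gradient-feasible trajectory through those samples is the open problem.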

'Statistical modelling of mechanical bearing life testing'

The goal is to model and optimize bearing life testing time under constraints. The constraints are of various kinds:

Number of available test machines (each machine has 2 test positions): K

Number of life tests to be run: N

Statistical distribution assumed for individual bearing life: Weibull (L10,β)

Assessed precision: expected maximum ratio between confidence bounds on life parameters

The parameters that need to be optimized are:

The number of samples / machines per test (may differ from one test to another)

The order of the tests

The individual test strategy (censoring, replacement...)

Bias correction method for the Maximum Likelihood Estimation used for the confidence bounds calculation (when deviating from the type II censoring scheme)

In order to assess the last point, Monte Carlo simulations are needed, in which random test data following specific test scenarios are generated. The bias introduced by the generated test data should be quantified, and a feasible strategy to minimize its impact on the final result should be devised.
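As an illustration of such a Monte Carlo study, the sketch below generates complete (uncensored) Weibull samples and estimates the shape parameter by maximum likelihood, exposing the well-known upward small-sample bias of that estimator. The function name and the bisection solver are our own assumptions; censoring, replacement and the actual bias-correction method would still need to be layered on top:

```python
import numpy as np

def weibull_shape_mle(t):
    """MLE of the Weibull shape parameter for a complete (uncensored)
    sample, found by bisection on the monotone profile-likelihood score."""
    logt = np.log(t)

    def score(b):
        tb = t ** b
        return (tb * logt).sum() / tb.sum() - 1.0 / b - logt.mean()

    lo, hi = 0.05, 50.0
    for _ in range(80):                  # score(b) is increasing in b
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Monte Carlo bias study: true shape 2.0, small samples of n = 10
rng = np.random.default_rng(42)
beta_true, n, reps = 2.0, 10, 2000
estimates = [weibull_shape_mle(rng.weibull(beta_true, size=n))
             for _ in range(reps)]
mean_est = float(np.mean(estimates))     # noticeably above 2.0: positive bias
```

Repeating the study under a given censoring/replacement scenario, instead of complete samples, is exactly the quantification asked for above.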

'Synchronizing numerical models of the atmosphere to improve weather and climate predictions'

Systems as diverse as clocks, singing crickets, cardiac pacemakers, firing neurons and applauding audiences exhibit a tendency to operate in synchrony. The phenomenon that dynamical systems synchronise their behaviour through some form of information exchange is known as synchronisation. In recent years this concept has also been applied in weather and climate research.

Observed synchronous behaviour of different regional climates is being studied by regarding the climate system as a hierarchy of many coupled subsystems with complex non-linear feedbacks and forcings.

In weather prediction one tries to reconstruct the total state of the atmosphere on the basis of a relatively small set of observations that are spread in space and time. A dynamical model of the atmosphere is used to interpolate between the datapoints. This process is called data-assimilation. But from another perspective it can be viewed as an attempt to synchronise the model state with the state of the atmosphere by exchanging state information.

By connecting different atmosphere models, a new, synchronised solution emerges that, by training the connections, can be closer to the observed solution than any of the individual model solutions. This approach is called super modelling and promises to improve weather and climate predictions. KNMI has been pioneering the super modelling approach, and interesting mathematical problems have emerged from this research.

Through trial and error it has been found that often just a part of the state vector needs to be exchanged between the models in order for the models to synchronise on a common solution. But which phase space coordinates are most effective? So far only linear nudging terms have been considered as the mathematical form of the connections but there might be other forms of connections that are more effective. For data-assimilation purposes it would be very helpful to know where and what should be measured in order to achieve the best synchronisation between the model state and the real atmosphere.

It has been found that when models are strongly connected, all models synchronise their behaviour. For very weak connections the models each trace their own attractor. For intermediate connection strengths, however, the models synchronise on solutions that are very different from the individual solutions, and in some cases a fixed point or periodic solution is found. Why does this happen, and why for these intermediate connection strengths?
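The strong- versus zero-coupling regimes can be reproduced in a toy experiment: two identical Lorenz-63 systems standing in for the atmosphere models, coupled by a linear nudging term in the x-equation. The parameter values, coupling form and integrator below are illustrative assumptions only, not KNMI's setup:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run_coupled(k, steps=20000, dt=0.001):
    """Two identical Lorenz-63 'models' coupled by a linear nudging term
    k*(x_other - x_self) in the x-equation; returns the mean max-norm
    difference between the two states over the last 5000 steps."""
    rng = np.random.default_rng(0)
    a = np.array([1.0, 1.0, 25.0]) + rng.normal(0.0, 1.0, 3)
    b = np.array([1.0, 1.0, 25.0]) + rng.normal(0.0, 1.0, 3)
    errs = []
    for i in range(steps):
        da, db = lorenz(a), lorenz(b)
        da[0] += k * (b[0] - a[0])        # nudge a towards b
        db[0] += k * (a[0] - b[0])        # and b towards a
        a, b = a + dt * da, b + dt * db   # forward Euler step
        if i >= steps - 5000:
            errs.append(np.abs(a - b).max())
    return float(np.mean(errs))

sync_err = run_coupled(k=50.0)   # strong coupling: the states converge
free_err = run_coupled(k=0.0)    # no coupling: trajectories decorrelate
```

Sweeping k between these extremes, and replacing the linear nudging term by other connection forms, is a direct way to probe the intermediate-strength behaviour described above.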

What is the best training strategy for the connections of the super model so that the synchronised solution is closest to the truth? We have pioneered a few approaches but fresh ideas are welcome!

'Mobility profiling from Smartphone sensordata: confidently know how people travel'

Our SWI problem formulation focuses on advancing the quality of our data derivation. Mobidot infers the route, role, objective and mode of transportation from Smartphone data. Smartphones possess a variety of sensors and signals (GPS, mobile telephone (4G), wi-fi, accelerometer) that, coupled with geographic databases, can be used to determine the motion and position of the user.

The continual monitoring and recording of data from Smartphone sensors, and the comparison with online geographic databases, drains the battery; hence a sensing strategy must be devised that maximizes information gathering for minimal energy usage. The first objective of the SWI problem is to optimize data measurement quality against battery usage. Sub-objectives:

Devise an optimal scheduling plan for sensing, for instance a regular 'heartbeat' operating mode on the Smartphone that detects changes in travel patterns, with optional triggers and sensing intervals adjusted to the inferred transportation mode.

Develop a method to filter data on the mobile side so that the crucial information is retained and redundant information is discarded, while maintaining the performance of the trip analysis (route, mode, role, objective).

Develop methods for inferring motion given sparse data (intermittent or incomplete).
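One natural candidate for the mobile-side filtering sub-objective is polyline simplification, e.g. Ramer-Douglas-Peucker, which keeps only the trace points needed to reproduce the route within a tolerance. The sketch below is our own illustration, not Mobidot's method:

```python
def rdp(points, eps):
    """Ramer-Douglas-Peucker: keep only the points needed so the
    simplified polyline stays within `eps` of the original trace."""
    def perp_dist(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        # distance from p to segment a-b (projection clamped to [0, 1])
        t = max(0.0, min(1.0,
                         ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return ((px - ax - t * dx) ** 2 + (py - ay - t * dy) ** 2) ** 0.5

    if len(points) < 3:
        return list(points)
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[imax - 1] <= eps:
        return [points[0], points[-1]]       # whole span within tolerance
    return rdp(points[:imax + 1], eps)[:-1] + rdp(points[imax:], eps)

# a trace that runs flat to (4, 0), then climbs: jitter dropped, corner kept
trace = [(0, 0), (1, 0.05), (2, -0.04), (3, 0.02), (4, 0), (5, 2), (6, 4)]
simplified = rdp(trace, eps=0.1)
```

The tolerance eps trades compression against trip-analysis fidelity, which is exactly the trade-off the sub-objective asks to quantify.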

The second objective of the SWI problem is to detect obvious errors in the automatically derived role, objective and mode detection as already performed by the central software platform of Mobidot:

Identify, based on the provided dataset (Smartphone sensor data: GPS, telephone and accelerometer), (online) geographic databases, and the information already inferred by Mobidot, the obvious cases in which the automatic inference is likely to be in error.

Minimize false inferences ('ghost trips'): displacements that are detected by the system but not actually made by the end-user. A potential cause is GPS drift or a series of inaccurate locations.
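A first-pass filter for the ghost-trip case might simply check whether a detected displacement ever leaves the reported positional accuracy of its own fixes. The heuristic below is purely our assumption (planar coordinates in metres), not Mobidot's rule:

```python
import math

def is_ghost_trip(points, accuracy_m=50.0):
    """Illustrative heuristic: a 'trip' whose fixes all stay within the
    GPS accuracy radius of their centroid is likely drift, not a real
    displacement. Coordinates are assumed planar, in metres."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    spread = max(math.hypot(x - cx, y - cy) for x, y in points)
    return spread <= accuracy_m

drift = [(0, 0), (5, 3), (-4, 2), (3, -6)]     # jitter around one spot
real = [(0, 0), (100, 0), (300, 0), (500, 0)]  # an actual displacement
```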

Mobidot will provide all relevant and needed data to develop data-deduction improvements and to test approaches and methods. This includes sample Smartphone multi-sensor data, sensor energy usage statistics and samples of the resulting anonymised mobility profiles.

'Power Line Route Optimisation in a finite spatial grid'

Building new infrastructure such as transmission lines, roads and rail is always a source of controversy. High-voltage transmission lines blight the landscape they are constructed on, bringing social, environmental and economic detriment to the areas they run through; equally, they form the fabric of modern infrastructure, enabling efficient transfer of power across the country. It is therefore vital that optimal routes are chosen: routes that are cost-effective, socially acceptable and environmentally friendly. Determining the route for a new line build is a complex issue, and routing criteria and engineering specifications vary from place to place. Network Mapping (NM) has experience in the optimum spotting of towers after the route corridor has been defined (vertical routing); this expertise led us to realise that there is potential for further savings in the construction of new lines if Network Mapping is also involved in generating the route corridor (horizontal routing).
This SWI problem is concerned with the development of systematic and robust methodology for routing new infrastructure, ideally taking into consideration all the relevant factors involved in the siting of power lines in the particular area of interest.

Topographic, societal and other factors would be imported as geospatial datasets. These would be assigned user-determined weightings, and each set of weightings would inform the most appropriate route selection. The initial analysis considers all possible route choices under the chosen weighting and identifies the optimum route; this is called the 'macro routing stage'. It has currently been developed to a proof-of-concept stage using the A* algorithm with a modified heuristic that includes goal-seek logic (although we are not tied to maintaining this particular methodology).
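The macro routing stage can be prototyped with textbook A* over a weighted grid. The sketch below uses the standard Manhattan-distance heuristic scaled by the cheapest cell cost (admissible and consistent, hence optimal), not NM's modified goal-seek heuristic:

```python
import heapq

def astar(cost, start, goal):
    """A* over a 4-connected grid of per-cell traversal costs. The
    heuristic is admissible, so the first pop of the goal is optimal."""
    rows, cols = len(cost), len(cost[0])
    cmin = min(min(row) for row in cost)
    h = lambda r, c: cmin * (abs(r - goal[0]) + abs(c - goal[1]))
    best = {start: 0.0}
    pq = [(h(*start), 0.0, start, [start])]
    while pq:
        f, g, (r, c), path = heapq.heappop(pq)
        if (r, c) == goal:
            return g, path
        if g > best.get((r, c), float("inf")):
            continue                       # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + cost[nr][nc]      # pay the cost of the entered cell
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(pq, (ng + h(nr, nc), ng,
                                        (nr, nc), path + [(nr, nc)]))
    return float("inf"), []

# toy weighting: a high-cost block (e.g. a village) forces a detour
grid = [[1, 1, 1, 1],
        [1, 9, 9, 1],
        [1, 9, 9, 1],
        [1, 1, 1, 1]]
route_cost, route = astar(grid, (0, 0), (3, 3))
```

In the real problem the cell costs would come from the weighted geospatial layers, and the later stages (landing versus oversailing costs, straight-line joining, angle reduction) refine this coarse route.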

The next stage is to use more precise geospatial datasets to inform the route selection, and also to consider the cost to land (or site a support) at a particular location and the cost to jump, or oversail, conductor over a particular location.

The next step considers joining all the chosen landing locations together by straight lines, thus taking fractions of each spatial location and ensuring the optimal route is still chosen.

The final step is to consider the removal or reduction of the angles through which the power line is being asked to turn at each landing point, weighing up the reduction in materials allowed by this modification against a potentially sub-optimal routing choice.

'Accurate dose delivery for radiation therapy: adapting treatment to daily anatomy'

In radiation oncology we treat patients with head and neck tumours over a course of 6-7 weeks. The treatment is usually based on a single CT scan acquired prior to treatment, which we use as input for dose calculations in the patient based on a collapsed-cone superposition algorithm. To optimise the dose distribution we use an inverse optimisation algorithm, all in commercial software. Typically, the computation and optimisation require about 1-2 hours for this patient group. The result is translated into settings of the linear accelerator that delivers the dose to the patient.

From experience we know that the daily posture of the patients is not the same as it was during the CT scan. We already have tools to compute the deformation vector field to map one posture to the other. Our request is to devise clever ways to translate this deformation to adjusted settings of the linear accelerator, such that the intended dose is delivered despite the different patient anatomy. The deformations may include tumour shrinkage or swelling of healthy tissue.
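To fix notation for the mapping step only (not for re-deriving accelerator settings, which is the actual request), the fragment below warps a 2D dose grid with a deformation vector field by bilinear interpolation; the array conventions are our own assumptions:

```python
import numpy as np

def warp_dose(dose, dvf):
    """Pull-back warp of a 2D dose grid with a deformation vector field.

    dose : (ny, nx) planned dose on the planning-CT grid
    dvf  : (ny, nx, 2) displacement in voxel units; dvf[r, c] points from
           voxel (r, c) of the daily anatomy back to its planning position
    """
    ny, nx = dose.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    ys = np.clip(yy + dvf[..., 0], 0, ny - 1)   # sample coordinates
    xs = np.clip(xx + dvf[..., 1], 0, nx - 1)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, ny - 1)
    x1 = np.minimum(x0 + 1, nx - 1)
    wy, wx = ys - y0, xs - x0                   # bilinear weights
    return ((1 - wy) * (1 - wx) * dose[y0, x0] + (1 - wy) * wx * dose[y0, x1]
            + wy * (1 - wx) * dose[y1, x0] + wy * wx * dose[y1, x1])

# sanity check: a zero deformation field leaves the dose unchanged
dose = np.arange(36.0).reshape(6, 6)
warped = warp_dose(dose, np.zeros((6, 6, 2)))
```

The open problem is the inverse direction: choosing accelerator settings so that the delivered dose, pushed through the daily deformation, matches the planned distribution.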

Programme and location

The workshop follows the traditional format described above: problems are presented on the Monday, the groups work on them throughout the week, and results are presented on the Friday. Wednesday evening is the conference dinner; on Thursday night there is pizza to fuel preparation of the final presentations.

The workshop is opened by Marjan Oudeman, Chair of the CvB of UU, on Monday morning in the Boothzaal, UBU (Heidelberglaan 3). Working rooms are situated in the Hans Freudenthalgebouw (Budapestlaan 6). Closing presentations take place on Friday, back in the Boothzaal.

For quick travel information, see the UBU travel advice. If needed, a parking pass may be arranged beforehand by contacting swi2015uu@gmail.com. Many participants stay at the Mitland hotel, which is close to the Oorsprongpark stop on bus line 28. Note that eduroam is available all over the campus, but make sure to set it up before leaving your home institution.

Registration and participants

The list of registered participants:

Aerts, Nieke, TU Berlin

Akkaya, Tugce, Delft University of Technology

Antonovici, Claudiu-Cristi, Leiden University

Baarsma, Arjen, Utrecht University

Bakri, Taoufik, TNO

Bijlsma, Marcel, Mobidot

Bikov, Dusan, University of Stip

Bisseling, Rob, Utrecht University

Blachere, Sebastien, SKF

Bootsma, Martin, Utrecht University / UMC Utrecht

Broeders, Emile, Utrecht University

Bruin, Erik, Utrecht University

Cao, Xiulei, Eindhoven University of Technology

De Leeuw, Bart, Utrecht University

De Weerdt, Elwin, Philips

Di Bucchianico, Alessandro, Eindhoven University of Technology

Frank, Jason, Utrecht University

Gavranovic, Haris, International University of Sarajevo

Geldhauser, Carina, University of Bonn

Gonzales Fuentes, Lee, Vrije Universiteit Brussel

Grasman, Johan, Wageningen University and Research Centre

Hemker, Piet, CWI Amsterdam

Huijssen, Koos, VORtech

Jansz, Margriet, STW

Jha, Rakesh, Eindhoven University of Technology

Kang, Ross, Radboud University Nijmegen

Keane, Michael, Delft University of Technology

Khimshiashvili, Giorgi, Ilia State University

Koolwaai, Johan, Mobidot

Kruseman, Anna, Utrecht University

Kryven, Ivan, University of Amsterdam

Lahaye, Domenico, Delft University of Technology

Leung, KaYin, Utrecht University

Li, Xinru, Leiden University

Meerman, Corine, Leiden University

Morelli, Leonardo, Leiden University

Muller, Tobias, Utrecht University

Munari, Pedro, Federal University of Sao Carlos

Reinhardt, Christian, VU University Amsterdam

Rens, Lisanne, CWI Amsterdam

Richardson, Paul, NM Group

Roccaverde, Andrea, Leiden University

Rottschafer, Vivi, Leiden University

Sbrizzi, Alessandro, UMC Utrecht

Selten, Frank, KNMI

Spitoni, Cristian, Utrecht University / UMC Utrecht

Stojanova, Aleksandra, "Goce Delcev" University of Stip

Stojkovikj, Natasha, "Goce Delcev" University of Stip

Van den Brink, Johan, Philips Healthcare

Van der Veen-Oei, Rosemarie, NWO

Van Heugten, Jasper, Radboud University Nijmegen

Van Kranen, Simon, NKI / Antoni van Leeuwenhoek Hospital

Van Leeuwen, Tristan, Utrecht University

Van Leeuwen, Daphne, CWI Amsterdam

Van Valenberg, Willem, Utrecht University

Varbanov, Zlatko, University of Veliko Tarnovo

Williams, Christopher, NM Group

Yan, Dong, Leiden University

Zegeling, Paul, Utrecht University

Zhou, Han, Utrecht University

Zwaan, Ian, Eindhoven University of Technology

Participants were encouraged to register by 14 January for fullest consideration.

Media items

Some photos taken during the event are collected in a gallery. Photo credits: Margriet Jansz (STW).

The event attracted coverage from RTV Utrecht and Radio EenVandaag: