
Available MSc thesis topics

This is a list of the available thesis topics, within the scope of my research interests, that may be undertaken by students about to complete an MSc in Computer Engineering, Computer Science or similar (e.g., the MSc in Computer Science and Networking jointly offered by the University of Pisa and Scuola Sant'Anna) who are interested in developing their MSc thesis project at the Real-Time Systems Laboratory (ReTiS) of Scuola Superiore Sant'Anna in Pisa.

If you are interested in one of the available topics, please send me an e-mail.

For a list of completed thesis projects, please refer to the dedicated page.

Massively distributed benchmark for server-less service chains with a micro-services based architecture

Description

Micro-service architectures leveraging a serverless deployment paradigm are becoming increasingly popular as core design principles for building distributed software and services in Cloud infrastructures. Cloud-native applications are increasingly often realized as a composition of a core of micro-services implementing the main application, plus a plethora of elastically scalable services made available within a Cloud Platform-as-a-Service infrastructure and deployed through a server-less run-time, which relieves developers and operators from the burden of explicitly managing virtual machines or containers. In these systems, a single request from a client is typically translated into dozens if not hundreds of "horizontal" interactions among micro-services deployed throughout one or more data centers within a Cloud Provider infrastructure.
In this context, it becomes important to have a tool for benchmarking different solutions in terms of architectural design and its impact on the end-to-end latency of a distributed application. The open-source distwalk tool takes the first steps towards such a system, and it has been used in a number of recent research works published at conferences on Cloud Computing.
This thesis proposal deals with realizing a set of extensions to the distwalk code base, adding a number of features that would make the tool more useful and more usable in different contexts. A non-exhaustive list of possible extensions includes:
  • integration of additional transport protocols besides TCP and UDP, such as HTTP, DPDK or XDP;
  • use of polling-based operations and kernel-bypass techniques, and evaluation of their impact on energy consumption;
  • support for more complex distributed computing topologies, such as arbitrary DAGs;
  • addition of probabilistic features to the workload generation capabilities;
  • experimental comparison of different threading and synchronization models and their impact on end-to-end latency on massively parallel multi-core machines, in a variety of usage scenarios, including deployment in virtual machines or containers on Kubernetes or OpenStack, or within a variety of cloud instances and networking configurations on a public cloud provider infrastructure such as AWS EC2 or Google GCE;
  • horizontal scaling capabilities of the framework.
Additional features might include a visual environment for editing workloads, launching experiments, and visualizing the obtained results in a number of different plot types (e.g., based on Eclipse, or written as a Gtk and/or KDE application). A further idea is to realize a set of Java client/server tools implementing exactly the same protocol, so as to be able to emulate cross-language distributed service topologies.
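To give an idea of the kind of measurement involved, the following is a minimal sketch of a client loop that sends fixed-size requests over TCP and records the per-request round-trip latency. This is illustrative only and is not distwalk's actual code; the server address, port and request size are arbitrary placeholders.

/* Minimal sketch of a TCP request/response latency benchmark
 * (illustrative only; not part of the distwalk code base).
 * Server address, port and message size are arbitrary placeholders. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define NUM_REQS 1000
#define REQ_SIZE 256

static double now_us(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
}

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in srv = { .sin_family = AF_INET, .sin_port = htons(7000) };
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);   /* placeholder echo server */
    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect");
        return 1;
    }
    char buf[REQ_SIZE] = { 0 };
    for (int i = 0; i < NUM_REQS; i++) {
        double t0 = now_us();
        /* a real tool would loop until the full reply has been received */
        if (write(fd, buf, sizeof(buf)) < 0 || read(fd, buf, sizeof(buf)) < 0)
            break;
        printf("%d %.1f\n", i, now_us() - t0);   /* per-request RTT in microseconds */
    }
    close(fd);
    return 0;
}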

Requirements

The student should be fluent with the C (and/or Java) programming language under Linux, and be familiar with multi-threaded parallel and distributed processing techniques.

Benefits

The student will have a deep dive on tackling challenging problems faced by cloud application developers to design and build distributed systems that are capable of handling requests at scale, optimizing the design for minimum per-request overheads. Many of these challenges are at the core of a number of research streams being investigated and published in today's research literature on cloud computing.

Background Readings

Notes

Some of the above features have been realized by Tommaso Burlon as part of his thesis in Information Engineering at Scuola Sant'Anna, notably the support for timeouts and retransmissions, the use of UDP as transport, and some ability to perform asynchronous requests. The changes have been integrated in the mainline tool.
Optimum placement of Kubernetes PODs/containers for NFV large-scale deployments

Description

Container technology is becoming increasingly important in a number of cloud computing domains, like Network Function Virtualization (NFV), where popular open-source orchestrators like Kubernetes may be used to deploy a number of Virtualized Network Functions (VNFs) across NFV data centers. In a large infrastructure like that of a network operator, thousands of components may have to be deployed, in the form of containers, across a large geographically distributed infrastructure. Therefore, it becomes essential to design a mechanism for the intelligent placement of containers across the physical resources, so as to optimize a number of key metrics of interest for the network operator. These may include cost, performance, latency, energy efficiency and sustainability.
This thesis proposal deals with realizing an optimum resource allocator for Kubernetes, capable of optimizing the placement of a number of instances across the infrastructure, for capacity management and planning purposes.
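As a toy illustration of the kind of placement logic involved (my own sketch, not the allocator to be developed), the fragment below greedily assigns each container to the feasible node with the largest residual CPU capacity, a simple worst-fit heuristic; a real formulation would consider multiple metrics (cost, latency, energy) and could rely on ILP or metaheuristic solvers. All node and container data are made up.

/* Toy worst-fit placement heuristic: assign each container to the
 * feasible node with the largest residual CPU capacity.
 * Purely illustrative; node/container data are made up. */
#include <stdio.h>

#define NODES 3
#define CONTAINERS 5

int main(void) {
    double node_free[NODES] = { 8.0, 16.0, 4.0 };            /* free CPUs per node    */
    double demand[CONTAINERS] = { 2.0, 6.0, 1.0, 3.0, 4.0 }; /* CPUs per container    */

    for (int c = 0; c < CONTAINERS; c++) {
        int best = -1;
        for (int n = 0; n < NODES; n++)
            if (node_free[n] >= demand[c] &&
                (best < 0 || node_free[n] > node_free[best]))
                best = n;
        if (best < 0) {
            printf("container %d: no feasible node\n", c);
            continue;
        }
        node_free[best] -= demand[c];
        printf("container %d (%.1f CPUs) -> node %d (%.1f CPUs left)\n",
               c, demand[c], best, node_free[best]);
    }
    return 0;
}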

Requirements

The student should be fluent with the Go and Python programming languages, and have familiarity with general optimization techniques.

Benefits

The student will have a deep dive on tackling challenging problems faced by network operators for optimizing their NFV infrastructure, playing with an increasingly popular open-source cloud orchestrator like Kubernetes.

Industrial Collaborations

The student will have the opportunity to be involved in state-of-the-art research activities being carried out in the context of a long-standing international collaboration going on between Scuola Sant'Anna and the Vodafone network operator.

Background Readings

Deploying complex Kubernetes services with end-to-end latency control

Description

Kubernetes is gaining popularity as a container management and orchestration engine in a number of cloud computing domains. In private cloud computing scenarios, such as Network Function Virtualization (NFV), Kubernetes may be deployed on a set of physical resources to manage a number of Virtualized Network Functions (VNFs), deployed as a set of elastic services that may be scaled according to dynamically changing demand.
When complex distributed applications need to be deployed in such an infrastructure, existing container (POD) scheduling and allocation mechanisms fall short of considering the full set of requirements that may need to be satisfied by the deployment, in order to meet precise performance and reliability requirements. This is especially true when managing time-critical NFV services, as needed in 5G stacks supporting Ultra-Reliable Low-Latency Communication (URLLC) scenarios, as required for smart factories and industrial automation, healthcare, intelligent transportation and novel virtual & augmented reality interaction scenarios.
This thesis proposal deals with realizing a modification to the Kubernetes POD allocation logic, so that it is possible to orchestrate the deployment of complex multi-POD components with fine-grained control on the expected end-to-end latency of the instantiated services, combining the possible use of a number of mechanisms, including real-time CPU scheduling, platform configuration and tuning at the operating-system level, QoS-aware and high-performance networking, and QoS-aware access to persistent data stores.
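One of the node-level building blocks mentioned above is real-time CPU scheduling. As a minimal illustration (not the mechanism the thesis has to produce), the snippet below pins the calling process to a dedicated CPU and switches it to the SCHED_FIFO real-time class using standard Linux APIs; in a Kubernetes deployment this kind of configuration would have to be driven by the orchestrator rather than hard-coded, and the CPU number and priority used here are arbitrary.

/* Pin the calling process to CPU 2 and give it a SCHED_FIFO priority.
 * Illustrative building block only; CPU number and priority are arbitrary.
 * Requires root (or CAP_SYS_NICE) to succeed. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);                        /* dedicate CPU 2 to this service */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

    struct sched_param sp = { .sched_priority = 50 };   /* mid-range RT priority */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    /* ... latency-sensitive service loop would run here ... */
    printf("running with SCHED_FIFO on CPU 2\n");
    return 0;
}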

Requirements

The student should be fluent with the Go programming language, and have familiarity with general optimization techniques.

Benefits

The student will have a deep dive on a hot topic in the development of cloud-related software, and gain the chance to develop a key tool helping to improve the features exposed by Kubernetes to time-sensitive cloud services.

Collaborations

The student will have the opportunity to be involved in state-of-the-art research activities being carried out in the context of an international collaboration going on between Scuola Sant'Anna and Ericsson.

Background Readings

Automata-based run-time verification of code in the Linux kernel

Description

Linux is gaining popularity as an operating system in a number of time-critical and safety-critical domains like automotive or railways. However, one of the critical elements still obstructing its use in said scenarios is the complexity of its kernel, with millions of lines of code, which makes it quite difficult to obtain the necessary certifications.
This complexity may be tackled by the use of formal methods, and an increasingly promising area is run-time verification, where automata-based models of various excerpts of the code base can be composed and analyzed, verifying that the run-time behavior complies with said models.
This thesis proposal deals with realizing an open-source tool for the description of automata and their composition, and their integration with a framework for run-time verification of code which is being actively developed by Red Hat for the Linux kernel.
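To give a flavor of the underlying idea, here is a tiny self-contained sketch of a deterministic automaton that monitors a stream of lock/unlock events and flags any sequence that unlocks without holding the lock or locks twice. This is not the Red Hat RV framework's code; the kernel framework applies the same principle to events collected from tracepoints, with monitors generated from automata models.

/* Tiny run-time verification sketch: a deterministic automaton checking
 * that lock/unlock events alternate correctly. Illustrative only. */
#include <stdio.h>

enum state { UNLOCKED, LOCKED, ERROR };
enum event { EV_LOCK, EV_UNLOCK };

static enum state step(enum state s, enum event e) {
    switch (s) {
    case UNLOCKED: return e == EV_LOCK   ? LOCKED   : ERROR;
    case LOCKED:   return e == EV_UNLOCK ? UNLOCKED : ERROR;
    default:       return ERROR;
    }
}

int main(void) {
    /* example trace: lock, unlock, unlock -> violation on the third event */
    enum event trace[] = { EV_LOCK, EV_UNLOCK, EV_UNLOCK };
    enum state s = UNLOCKED;
    for (int i = 0; i < 3; i++) {
        s = step(s, trace[i]);
        if (s == ERROR) {
            printf("violation at event %d\n", i);
            return 1;
        }
    }
    printf("trace accepted\n");
    return 0;
}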

Requirements

The student should be fluent with the C/C++ programming language. Some knowledge of, and experience with, Qt or other GUI subsystems is desirable. Students of an MSc degree in computer engineering or computer science are suitable to undertake this thesis project.

Benefits

The student will have a deep dive on a hot topic in the development of time-critical and safety-critical software, and gain the chance to develop a key tool helping to improve an automata-based run-time verification toolchain for the Linux kernel.

Collaborations

The student will have the opportunity to be involved in state-of-the-art research activities being carried out in the context of an international collaboration going on between Scuola Sant'Anna and Red Hat.

Background Readings

Adaptive high-performance networking

Description

High-performance networking primitives based on kernel bypass, such as DPDK, are attracting increasing attention from industry practitioners and academics, thanks to their capability to achieve higher throughput and lower latencies than traditional socket-based primitives, which require OS intervention for the transmission of each packet or batch.
However, the achievable performance strictly depends on how many CPUs on the platform are dedicated to the switching logic among the multiple entities that need to communicate. Said logic becomes a critical part of the system, constituting a potential bottleneck for techniques of this kind. The consequent computational requirements, as well as the associated power consumption, may turn out to be excessive during periods in which the hosted services exhibit moderate workloads.
This thesis proposal deals with realizing an adaptive high-performance networking switch for DPDK, capable of dynamically switching among a number of modes, including the ability to instantiate additional threads for packet switching and remove them as needed, based on the instantaneous conditions of the system.
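As a rough illustration of the adaptation logic only (a generic pthread sketch of my own, not actual DPDK code), the fragment below shows a control loop that activates or parks an extra busy-polling worker thread based on an observed queue occupancy; the thresholds, the simulated load and the 100 ms control period are made-up placeholders.

/* Sketch of an adaptive control loop that activates or parks an extra
 * busy-polling worker based on queue occupancy. Generic pthread code for
 * illustration only; thresholds and the load metric are made up.
 * Compile with: gcc -pthread adaptive.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int queue_len;       /* would be filled by the data path */
static atomic_bool extra_stop;

static void *extra_worker(void *arg) {
    (void)arg;
    while (!atomic_load(&extra_stop)) {
        /* busy-poll the RX queues here (e.g., rte_eth_rx_burst() in DPDK) */
        ;
    }
    return NULL;
}

int main(void) {
    pthread_t extra;
    bool extra_active = false;

    for (int iter = 0; iter < 100; iter++) {           /* 100 ms control period */
        atomic_store(&queue_len, iter < 50 ? 2000 : 0); /* simulated load for demo */
        int load = atomic_load(&queue_len);
        if (load > 1000 && !extra_active) {             /* high watermark */
            atomic_store(&extra_stop, false);
            pthread_create(&extra, NULL, extra_worker, NULL);
            extra_active = true;
        } else if (load < 100 && extra_active) {        /* low watermark */
            atomic_store(&extra_stop, true);
            pthread_join(extra, NULL);
            extra_active = false;
        }
        usleep(100 * 1000);
    }
    if (extra_active) {
        atomic_store(&extra_stop, true);
        pthread_join(extra, NULL);
    }
    return 0;
}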

Requirements

The student should be fluent with socket-based networking primitives and the use of the C programming language. Some knowledge and experience with parallel programming is desirable. Computer engineering, computer science and telecommunication engineering are all excellent backgrounds to undertake an MSc thesis project on the proposed topics.

Benefits

The student will have a deep dive on efficient software engineering for high-performance networking switches, gaining a practical and hands-on experience on some of the key and hottest technologies for the development of future data-intensive distributed software in the industry of cloud and distributed computing.

Collaborations

The student will have the opportunity to be involved in state-of-the-art research activities being carried out in the context of an international collaboration tackling some among the most important challenges in realizing high-performance networking services.

Background Readings

Model-Driven Engineering with multi-core, GPU or FPGA acceleration

Description

Model-Driven Engineering and Model-Based Design are gaining momentum in various embedded industrial fields like automotive, railroad, aerospace and others. These techniques involve the use of a number of tools that help system designers and software engineers to carry out the whole software life-cycle of a component or application: from the requirements specification to high-level architecture design, down to low-level components specification and the final implementation phases. The use of MDE/MBD techniques, also enriched by automated code generation tools, promises to reduce the potential gap between the features and the properties of the implemented system, versus the ones that were stated in the initial high-level specifications, including critical non-functional requirements concerning the performance and timeliness of the realized components.
However, the computational requirements of modern cyber-physical systems have grown enormously in the last decade, with the growing interest in deploying complex robot control algorithms requiring on-line optimization, sophisticated computer vision algorithms for object recognition, trajectory detection and forecasting, and machine learning and artificial intelligence techniques applying data analysis and forecasting as required in predictive maintenance, towards the full potential of the so-called Industry 4.0 revolution. All of these algorithms need expensive vector and matrix operations that are conveniently accelerated through the use of multi- and many-core general-purpose computing platforms, GP-GPU acceleration or even FPGA acceleration. However, writing software capable of running on such a wide heterogeneity of hardware elements is quite cumbersome nowadays.
The AMPERE European Project is tackling these challenges, with a consortium featuring key industrial players in the field of high-performance software for automotive and railroad use-cases, like BOSCH and THALES, and renowned international research centers in the fields of high-performance computing, real-time and energy-efficient systems like the Barcelona Supercomputing Center, the RETIS of Scuola Superiore Sant'Anna in Pisa, the ETH in Zurich and the ISEP engineering institute in Porto.
This thesis proposal deals with extending the open-source APP4MC plugin for Eclipse, supporting the AMALTHEA MDE methodology, for the specification of Runnables with either: a) multi-core acceleration via OpenMP; b) GPU-acceleration via OpenCL; c) FPGA-acceleration via the FRED framework realized at the RETIS.
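As a small example of the kind of acceleration target involved, here is a minimal OpenMP-accelerated matrix-vector multiplication in C. This is a generic illustration of multi-core acceleration, not code produced by APP4MC or the AMALTHEA toolchain; sizes and values are arbitrary.

/* Minimal multi-core accelerated matrix-vector multiply using OpenMP.
 * Generic illustration of the kind of kernel a Runnable might contain;
 * sizes are arbitrary. Compile with: gcc -fopenmp mv.c */
#include <stdio.h>
#include <omp.h>

#define N 1024

int main(void) {
    static double A[N][N], x[N], y[N];

    /* fill with deterministic values */
    for (int i = 0; i < N; i++) {
        x[i] = 1.0;
        for (int j = 0; j < N; j++)
            A[i][j] = (i + j) % 7;
    }

    /* each row of the result is independent, so rows are computed in parallel */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        double acc = 0.0;
        for (int j = 0; j < N; j++)
            acc += A[i][j] * x[j];
        y[i] = acc;
    }

    printf("y[0] = %f (up to %d threads available)\n", y[0], omp_get_max_threads());
    return 0;
}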

Requirements

The student should be familiar with modeling languages and frameworks such as UML or AUTOSAR. The student should be fluent in programming in Java and C/C++. Some knowledge and experience with parallel and real-time software programming is desirable. Computer engineering, computer science and electronic engineering are all excellent backgrounds to undertake an MSc thesis project on the proposed topics.

Benefits

The student will have a deep dive on efficient software engineering for parallel and heterogeneous hardware boards, gaining a practical and hands-on experience on some of the key technologies for the development of future software components in the embedded industry.

Industrial collaborations

This thesis proposal is framed in the context of the AMPERE European Project, in which the RETIS has collaborated with renowned industrial players in the field of high-performance software for automotive and railroad use-cases, like BOSCH.

Background Readings

Mechanisms for efficient communications among containers in Cloud Computing and NFV
More and more software components and services are nowadays deployed over shared infrastructures, either at a public cloud provider or in-house within private cloud data centres. In this context, OS-level virtualization mechanisms, such as Linux Containers (LXC), Docker and others, are growing in demand and popularity as deployment and isolation mechanisms, thanks to their increased efficiency in resource usage when compared with traditional machine virtualization techniques. Containers are becoming a fundamental brick in novel architectures for distributed fault-tolerant components, which are increasingly based on micro-services. This is a development trend where monolithic software is split into a multitude of smaller services, which can be independently designed, developed, deployed and scaled out as collections of containers, enhancing the reliability of the overall solution and adding a higher degree of flexibility in the management of the underlying physical resources needed at run-time.
Current middleware solutions for communications among containers involve an extensive use of networking protocols, often based on TCP/IP, HTTP, XML-RPC, JSON-RPC, SOAP or others, for letting different container environments communicate with each other, often leading to an excess of overheads. The purpose of this thesis proposal is to investigate more efficient mechanisms, particularly for services that end up co-located on the same physical host, with a use-case focused on either distributed multimedia processing or virtualized network functions in an NFV infrastructure.
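To make the idea concrete, the fragment below sketches one candidate mechanism: co-located containers exchanging data through a POSIX shared-memory segment instead of the network stack. This is illustrative only; the segment name and size are placeholders, the containers would need a shared /dev/shm mount or IPC namespace, and a real design would add synchronization and a proper ring-buffer protocol.

/* Sketch of a producer writing into a POSIX shared-memory segment that a
 * co-located container could map and read, avoiding the network stack.
 * Illustrative only: no synchronization or framing protocol is shown.
 * Link with -lrt on older glibc versions. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/svc_chain_buf"   /* placeholder segment name */
#define SHM_SIZE 4096

int main(void) {
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, SHM_SIZE) < 0) { perror("ftruncate"); return 1; }

    char *buf = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* the consumer in the other container maps the same segment and reads it */
    strcpy(buf, "payload handed over without crossing the network stack");
    printf("wrote %zu bytes into %s\n", strlen(buf), SHM_NAME);

    munmap(buf, SHM_SIZE);
    close(fd);
    return 0;
}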

Requirements

Strong programming skills in C/C++ and Python, solid knowledge of concurrent programming and OS primitives for inter-process communications (IPC) and synchronization.

Benefits

The student will have a good opportunity to refine his/her skills in the above fields, and gain a unique experience with developing distributed services over shared physical infrastructures, building a practical experience on advanced OS concepts, which are fundamental in the ICT (Information and Communications Technology) industry.

Industrial collaborations

This thesis proposal is framed in the context of a long-standing industrial collaboration with Ericsson, Stockholm (Sweden).

Background Readings

Fault-tolerant replication log with real-time performance and high reliability
NoSQL database services are gaining momentum in cloud and distributed computing as a key technology enabling scalable and real-time applications to store and retrieve data according to precise timing, consistency and availability requirements (which can be formalized in an SLA -- service-level agreement).
A key component of such a system is the replication log, guaranteeing a consistent view on the sequence of operations to perform on each data object. Realizing a fault-tolerant replication log with high availability and consistency, yet predictable performance, presents a variety of technical challenges spanning across software engineering, concurrent programming, operating systems and kernel internals, including CPU and disk scheduling.
In this thesis, we propose the design and realization of a fault-tolerant, real-time replication log with minimum functionality.
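As a minimal starting point, the sketch below appends fixed-format entries to an on-disk log and forces them to stable storage with fdatasync(), which is the basic durability step every replica of such a log must perform. Replication, consensus and recovery are deliberately left out, and the file name and entry format are arbitrary assumptions of this sketch.

/* Minimal durable append to a local log file: each entry is a sequence
 * number plus a payload, forced to disk with fdatasync(). Replication,
 * consensus and recovery are not shown; format and path are arbitrary. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static int log_append(int fd, uint64_t seq, const char *payload) {
    char entry[256];
    int len = snprintf(entry, sizeof(entry), "%llu %s\n",
                       (unsigned long long)seq, payload);
    if (write(fd, entry, len) != len)
        return -1;
    return fdatasync(fd);   /* the entry is durable only after this returns */
}

int main(void) {
    int fd = open("replica.log", O_WRONLY | O_CREAT | O_APPEND, 0600);
    if (fd < 0) { perror("open"); return 1; }
    for (uint64_t seq = 1; seq <= 3; seq++)
        if (log_append(fd, seq, "SET key=value") != 0) {
            perror("log_append");
            return 1;
        }
    close(fd);
    return 0;
}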

Requirements

The student shall have strong programming skills in C/C++ and/or Java, experience with concurrent/multi-threaded programming, solid knowledge and understanding of computer architectures and their performance implications, operating systems internals and Linux, and be familiar with developing distributed software.

Benefits

The student will have a good opportunity to refine his/her skills in the above fields, and gain a unique experience with developing distributed, fault-tolerant, real-time software components, which are fundamental in the ICT (Information and Communications Technology) industry.

Background Readings

Improvements to the SCHED_DEADLINE Linux process scheduler for real-time multimedia
The Linux kernel has recently been enriched with SCHED_DEADLINE, an EDF-based process scheduler that is particularly promising for real-time and multimedia workloads. The scheduler exhibits a minimum set of features, but several extensions are possible for various use-cases. In this project, the student will design and realize extensions suitable to support a specific multimedia-oriented use-case (e.g., when using the JACK or PipeWire architectures for low-latency audio, or the new AAudio API for low-latency audio processing on Android), and will adapt user-space application components to take advantage of the enriched scheduler.
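For reference, this is how a user-space thread can request a SCHED_DEADLINE reservation today, via the sched_setattr() system call. It is a minimal sketch with arbitrary runtime/deadline/period values; glibc has historically not provided a wrapper for this call, so the raw syscall is used, and the struct layout is declared locally as in the kernel documentation examples.

/* Minimal example of switching the calling thread to SCHED_DEADLINE with a
 * 2 ms runtime every 10 ms period. Values are arbitrary; needs root or
 * suitable capabilities/RT limits to succeed. */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

struct sched_attr {
    uint32_t size;
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;
    uint32_t sched_priority;
    uint64_t sched_runtime;   /* all times in nanoseconds */
    uint64_t sched_deadline;
    uint64_t sched_period;
};

int main(void) {
    struct sched_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.sched_policy   = SCHED_DEADLINE;
    attr.sched_runtime  =  2 * 1000 * 1000;   /*  2 ms budget   */
    attr.sched_deadline = 10 * 1000 * 1000;   /* 10 ms deadline */
    attr.sched_period   = 10 * 1000 * 1000;   /* 10 ms period   */

    if (syscall(SYS_sched_setattr, 0, &attr, 0) != 0) {
        perror("sched_setattr");
        return 1;
    }
    printf("now running under SCHED_DEADLINE\n");
    /* periodic real-time work would go here */
    return 0;
}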

Requirements

The student shall have strong programming skills in C/C++, experience with concurrent/multi-threaded programming, solid knowledge and understanding of computer architectures and their performance implications, operating systems internals and Linux, and be familiar with developing kernel-level software.

Benefits

The student will have a good opportunity to refine his/her skills in the above fields, and gain a unique experience with developing real-time multimedia-oriented systems.

Industrial collaborations

In this area, we have long-standing industrial collaborations with Arm (Cambridge, UK) and Red Hat.

Background Readings

Real-time spectrum analyzer for audio signals empowered by Artificial Intelligence
The project consists of realizing a spectrum analyzer for audio signals which applies neural networks in order to recognize common sound patterns. The project may take various directions depending on the interests and skills of the candidate. For example, the software might be able to recognize the tones of notes played by an instrument (realizing a real-time sound-to-MIDI component), or it might recognize different sound types or sound patterns, or it might even venture into the land of voice recognition. The project might be realized as a Qt or Gnome desktop application, using the JACK framework for low-latency audio or the Advanced Linux Sound Architecture (ALSA) sound library on Linux, or it might be realized as an Android application for smartphones and tablets using the new AAudio API for low-latency audio processing on Android. For recognition of sounds and/or sound patterns, the project might rely on machine learning, neural networks and/or traditional optimization techniques.
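As a minimal building block, the following sketch computes the magnitude spectrum of a buffer of audio samples with a naive DFT. It illustrates only the analysis step: a real application would use an FFT library and a windowing function, and would feed the spectrum (or features derived from it) to the recognition model. The sample rate and the synthesized test tone are arbitrary.

/* Naive DFT magnitude spectrum of a buffer of audio samples; a real-time
 * analyzer would use an FFT and a window function instead.
 * Compile with: gcc dft.c -lm */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 1024
#define SAMPLE_RATE 48000.0

int main(void) {
    double x[N], mag[N / 2];

    /* synthesize a 440 Hz test tone as input */
    for (int n = 0; n < N; n++)
        x[n] = sin(2.0 * M_PI * 440.0 * n / SAMPLE_RATE);

    /* magnitude of each frequency bin up to the Nyquist frequency */
    for (int k = 0; k < N / 2; k++) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; n++) {
            re += x[n] * cos(2.0 * M_PI * k * n / N);
            im -= x[n] * sin(2.0 * M_PI * k * n / N);
        }
        mag[k] = sqrt(re * re + im * im);
    }

    /* report the dominant bin and its approximate frequency */
    int peak = 1;
    for (int k = 2; k < N / 2; k++)
        if (mag[k] > mag[peak])
            peak = k;
    printf("peak at bin %d (~%.1f Hz)\n", peak, peak * SAMPLE_RATE / N);
    return 0;
}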

Requirements

The student shall be fluent in C/C++ and/or Java programming and be familiar with the development of applications with a Graphical User Interface (GUI).

Benefits

The student will gain insightful knowledge about how to build real-time audio processing applications, enhanced with a GUI either on desktop or Android systems.

Background Readings

Temporal predictability of distributed, virtualized real-time applications
In recent years, virtualization technologies have been establishing themselves as an effective solution for providing software services, even complex ones, to distributed applications. These technologies make it possible to abstract away the physical machine on which computations take place, creating a set of virtual machines (VMs) and therefore allowing more than one operating system (with its applications) to run on the same physical machine. Unfortunately, however, currently available virtualization technologies are often inadequate to support applications with timing constraints, and cannot stably guarantee predefined levels of quality of service to the end user. Nowadays, many distributed applications require bounded and predictable response times in order to provide their services correctly: for example, virtual reality, telepresence or, more generally, on-line collaboration applications, which need to acquire, process and display data with fairly precise timing.
The problem of guaranteeing a sufficient amount of resources, with the right temporal granularity, to this kind of application becomes even thornier because of the interference that may arise among VMs using different resources, typically computing and networking. For example, a VM with heavy I/O traffic may negatively affect the computing performance of other VMs.
This thesis proposes to investigate the issues that prevent virtualized software components from achieving real-time, predictable performance, and to experiment with some of the state-of-the-art temporal isolation mechanisms from the world of soft real-time systems.

Requirements

Excellent knowledge of the C language, of the TCP/IP stack, and of the so-called "servers" from the real-time scheduling literature. Good familiarity with the Linux operating system and an interest in experimenting with non-standard kernel features.

Benefits

The student will have the opportunity to concretely apply some aspects of real-time systems theory in the extremely challenging context of distributed, virtualized real-time applications, using temporal isolation mechanisms that will constitute the foundations of Quality of Service support in the operating systems of tomorrow. Moreover, he/she will become familiar with virtualization tools such as KVM, which are at the basis of state-of-the-art network infrastructures.
Simulation of Cloud Computing infrastructures
Missing description
Operating systems and scheduling for scalable multicore systems
Multicore systems are taking hold at a relentless pace. In the near future, the computing world will be dominated by mobile devices acting as access points to fully distributed applications made available remotely by suitable providers. Tomorrow's Cloud Computing applications will make extensive use of massively parallel systems, the so-called many-core systems, for which today's operating systems are inadequate for an optimal management of resources.
In this area, the proposal is to investigate scalability issues at the operating system kernel level. In particular, several thesis projects are possible in this area:
  • simulation of the impact on applications of innovative kernel models recently appeared in the literature with scalability goals with respect to the number of cores, which for example impose a partitioning of functionality across the available cores, reducing contention for access to shared kernel data structures; the simulation should take into account the impact of the hardware interconnect topology on the communications among the different cores, both explicit (inter-core interrupts) and implicit (cache coherence protocols);
  • distributed scheduling algorithms that scale to thousands of cores, with load-balancing policies based on only partial knowledge of the system state; possibly, the use in this area of concepts from the world of peer-to-peer systems and gossip protocols may be explored;
  • modifications to the Linux kernel to improve aspects related to scalability as a function of the number of available cores; for example, redesign of some key shared kernel data structures so as to reduce contention among the many cores accessing them, modifications to the scheduler and to the load-balancing logic for a greater decoupling of the operations carried out by each core, partitioning of the available hardware resources among independent (and possibly heterogeneous) kernel instances, etc.

Requirements

In general, for all theses in this area, an excellent knowledge of operating systems and computer architectures is required. Moreover, each specific thesis proposal may require additional individual knowledge and skills.

Benefits

The student will have the opportunity to acquire competence and experience in the world of parallel computing, concurrent and distributed programming, and operating-system support for massively parallel systems, with particular reference to the design of scalable and efficient scheduling algorithms and synchronization primitives.
