
Available thesis topics

Here is a list of the available thesis topics, in my areas of research expertise, for Master's (Laurea Magistrale/Specialistica) students in Computer Engineering, Computer Science or related degree programmes (e.g., the MSc in Computer Science and Networking jointly run by the University of Pisa and Scuola Sant'Anna), interested in carrying out their thesis project at the Real-Time Systems Laboratory (ReTiS) of Scuola Superiore Sant'Anna in Pisa.

If you are interested in one of the available topics, feel free to send me an e-mail.

For a list of completed thesis projects, please refer to the dedicated page.

Massively distributed benchmark for serverless service chains with a microservices-based architecture
This description is available in English only.

Description

Microservices-based architectures leveraging a serverless deployment paradigm are becoming increasingly popular as core design principles for building distributed software and services in Cloud infrastructures. Cloud-native applications are increasingly often realized as a composition of a core of micro-services realizing the main application, plus a plethora of elastically scalable services made available within a Cloud Platform-as-a-Service infrastructure, and deployed through a serverless run-time, which relieves developers and operators from the burden of explicitly managing virtual machines or containers. In these systems, a single request from a client is typically translated into dozens if not hundreds of "horizontal" interactions among micro-services deployed throughout one or more data centers, within a Cloud Provider infrastructure.
In this context, it becomes important to have a tool available to benchmark different solutions in terms of architectural design and their impact on the end-to-end latency of a distributed application. The open-source distwalk tool takes the first steps towards such a system, and it has been used in a number of recent research works published at conferences on Cloud Computing.
This thesis proposal deals with realizing a set of extensions to the code base of distwalk, adding a number of features that would make the tool more useful and more usable in different contexts. A non-exhaustive list of possible extensions includes:
  • integration of additional transport protocols besides TCP and UDP, such as HTTP, DPDK or XDP;
  • use of polling-based operations and kernel-bypass techniques, and their impact on energy consumption;
  • support for more complex distributed computing topologies, like arbitrary DAGs;
  • addition of probabilistic features to the workload generation capabilities (see the sketch below);
  • experimental comparison of different threading and synchronization models and their impact on end-to-end latency on massively parallel multi-core machines, in a variety of usage scenarios, including deployment in virtual machines or containers under Kubernetes or OpenStack, or within a variety of cloud instances and networking configurations in a public cloud provider infrastructure such as AWS EC2 or Google GCE;
  • horizontal scaling capabilities of the framework.
Additional features might even include a visual environment for editing workloads, launching experiments, and visualizing the obtained results in a number of different plot types (e.g., based on Eclipse, or simply written as a Gtk and/or KDE application). A further idea is to realize a set of Java client/server tools implementing exactly the same protocol, so as to be able to emulate cross-language distributed service topologies.
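To give a flavour of the probabilistic workload-generation point above, here is a minimal sketch (not taken from the distwalk code base) showing how exponentially distributed inter-arrival times and request sizes could be drawn via inverse-transform sampling:

```c
/* Minimal sketch of probabilistic workload generation (NOT distwalk code):
 * draws exponentially distributed inter-arrival times and request sizes
 * via inverse-transform sampling. Compile with -lm. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Return an exponentially distributed sample with the given mean. */
static double exp_sample(double mean) {
    double u = (rand() + 1.0) / (RAND_MAX + 2.0);   /* u in (0,1) */
    return -mean * log(u);
}

int main(void) {
    srand(42);
    double t_us = 0.0;                     /* simulated send time (us) */
    for (int i = 0; i < 10; i++) {
        t_us += exp_sample(1000.0);        /* mean inter-arrival: 1000 us */
        long size = (long)exp_sample(4096.0);   /* mean request size: 4 KiB */
        printf("req %d: send at %.1f us, payload %ld bytes\n", i, t_us, size);
    }
    return 0;
}
```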

Requirements

The student should be fluent in the C (and/or Java) programming language under Linux, and familiar with multi-threaded, parallel and distributed processing techniques.

Benefits

The student will take a deep dive into the challenging problems faced by cloud application developers in designing and building distributed systems capable of handling requests at scale, optimizing the design for minimum per-request overhead. Many of these challenges are at the core of a number of research streams currently being investigated and published in the research literature on cloud computing.

Background Readings

Notes

Some of the above features have been realized by Tommaso Burlon as part of his thesis in Information Engineering at Scuola Sant'Anna, notably the support for timeouts and retransmissions, the use of UDP as transport, and some ability to perform asynchronous requests. The changes have been integrated in the mainline tool.
Optimum placement of Kubernetes PODs/containers for large-scale NFV deployments
This description is available in English only.

Description

Container technology is becoming increasingly important in a number of cloud computing domains, like Network Function Virtualization (NFV), where popular open-source orchestrators like Kubernetes may be used to deploy a number of Virtualized Network Functions (VNFs) across NFV data centers. In a large infrastructure like that of a network operator, we may have to deploy thousands of components, in the form of containers, across a large geographically distributed infrastructure. Therefore, it becomes essential to design a mechanism for intelligent placement of containers across the physical resources, so as to optimize a number of key metrics of interest for the network operator. These may include cost, performance, latency, energy efficiency and sustainability.
This thesis proposal deals with realizing an optimum resource allocator for Kubernetes, capable of optimizing the placement of a number of instances across the infrastructure, for capacity management and planning purposes.
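Just to fix ideas on the placement problem, the following is a minimal sketch of a first-fit-decreasing heuristic driven by CPU requests only; the actual allocator would plug into the Kubernetes scheduling pipeline and optimize several of the metrics listed above at once (node capacities and pod demands below are made up):

```c
/* Sketch of a first-fit-decreasing placement heuristic by CPU demand.
 * Purely illustrative: a real allocator would interact with the Kubernetes
 * scheduler and optimize several metrics at once. */
#include <stdio.h>
#include <stdlib.h>

#define N_NODES 3
#define N_PODS  5

static int cmp_desc(const void *a, const void *b) {
    return *(const int *)b - *(const int *)a;
}

int main(void) {
    int capacity[N_NODES] = {4000, 4000, 2000};            /* free millicores */
    int demand[N_PODS]    = {1500, 3000, 500, 2500, 1000}; /* pod requests    */

    qsort(demand, N_PODS, sizeof(int), cmp_desc);  /* biggest pods first */
    for (int p = 0; p < N_PODS; p++) {
        int placed = -1;
        for (int n = 0; n < N_NODES && placed < 0; n++)
            if (capacity[n] >= demand[p]) {        /* first node that fits */
                capacity[n] -= demand[p];
                placed = n;
            }
        if (placed >= 0)
            printf("pod (%d mcpu) -> node %d\n", demand[p], placed);
        else
            printf("pod (%d mcpu) -> unschedulable\n", demand[p]);
    }
    return 0;
}
```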

Requirements

The student should be fluent in the Go and Python programming languages, and have familiarity with general optimization techniques.

Benefits

The student will take a deep dive into the challenging problems faced by network operators in optimizing their NFV infrastructure, working hands-on with an increasingly popular open-source cloud orchestrator like Kubernetes.

Industrial Collaborations

The student will have the opportunity to be involved in state-of-the-art research activities carried out in the context of an ongoing, long-standing international collaboration between Scuola Sant'Anna and the Vodafone network operator.

Background Readings

Deploying complex Kubernetes services with end-to-end latency control
This description is available in English only.

Description

Kubernetes is gaining popularity as a container management and orchestration engine in a number of cloud computing domains. In private cloud computing scenarios, such as Network Function Virtualization (NFV), Kubernetes may be deployed on a set of physical resources to manage a number of Virtualized Network Functions (VNFs), deployed as a set of elastic services that may be scaled dynamically according to the changing demand.
When complex distributed applications need to be deployed in such an infrastructure, existing container (POD) scheduling and allocation mechanisms fall short of considering the full set of requirements that may need to be satisfied by the deployment, in order to meet precise performance and reliability requirements. This is especially true when managing time-critical NFV services, as needed in 5G stacks supporting Ultra-Reliable Low-Latency Communication (URLLC) scenarios, as required by smart factories and industrial automation, healthcare, intelligent transportation and novel virtual & augmented reality interaction scenarios.
This thesis proposal deals with realizing a modification to the Kubernetes POD allocation logic, so that it is possible to orchestrate the deployment of complex multi-POD components, with fine-grained control over the expected end-to-end latency of the instantiated services, combining the possible use of a number of mechanisms, including real-time CPU scheduling, platform configuration and tuning at the operating system level, QoS-aware and high-performance networking, and QoS-aware access to persistent data stores.
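As a purely illustrative example of one sub-problem hidden in such fine-grained control, the sketch below splits an end-to-end latency budget across a chain of PODs proportionally to their nominal service times; the proportional policy and the figures are assumptions, not part of the proposal:

```c
/* Illustrative sketch: split an end-to-end latency budget across a chain
 * of PODs, proportionally to each POD's nominal service time.
 * Policy and numbers are assumptions for illustration only. */
#include <stdio.h>

int main(void) {
    double e2e_budget_ms = 20.0;                  /* end-to-end target      */
    double service_ms[] = {2.0, 5.0, 1.0, 4.0};   /* nominal per-POD times  */
    int n = sizeof(service_ms) / sizeof(service_ms[0]);

    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += service_ms[i];

    for (int i = 0; i < n; i++) {
        double budget = e2e_budget_ms * service_ms[i] / total;
        printf("POD %d: nominal %.1f ms -> per-hop budget %.2f ms\n",
               i, service_ms[i], budget);
    }
    return 0;
}
```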

Requirements

The student should be fluent in the Go programming language, and have familiarity with general optimization techniques.

Benefits

The student will take a deep dive into a hot topic in the development of cloud-related software, and gain the chance to develop a key tool helping to improve the features exposed by Kubernetes to time-sensitive cloud services.

Collaborations

The student will have the opportunity to be involved in state-of-the-art research activities carried out in the context of an ongoing international collaboration between Scuola Sant'Anna and Ericsson.

Background Readings

Automata-based run-time verification of code in the Linux kernel
This description is available in English only.

Description

Linux is gaining popularity as an operating system in a number of time-critical and safety-critical domains like automotive or railways. However, one of the critical elements still obstructing its use in such scenarios is the complexity of its kernel, with millions of lines of code, which makes it quite difficult to obtain the necessary certifications.
This complexity may be tackled by the use of formal methods, and an increasingly promising area is run-time verification, where automata-based models of various excerpts of the code base can be composed and analyzed, verifying that the run-time behavior complies with said models.
This thesis proposal deals with realizing an open-source tool for the description of automata and their composition, and their integration with a framework for run-time verification of code which is being actively developed by Red Hat for the Linux kernel.
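To illustrate the basic principle of automata-based run-time verification (not the actual interface of the Red Hat RV framework), the following minimal sketch checks a trace of events against a deterministic automaton modelling a simple locking protocol:

```c
/* Minimal sketch of automata-based run-time verification: a deterministic
 * automaton modelling "a resource must be locked before it is used" is
 * checked against a trace of events. Illustrative only; the Linux kernel
 * RV framework uses its own model format and instrumentation. */
#include <stdio.h>

enum state { UNLOCKED, LOCKED, ERROR };
enum event { EV_LOCK, EV_USE, EV_UNLOCK };

static enum state next(enum state s, enum event e) {
    switch (s) {
    case UNLOCKED: return e == EV_LOCK ? LOCKED : ERROR;   /* use w/o lock */
    case LOCKED:   return e == EV_UNLOCK ? UNLOCKED
                        : e == EV_USE ? LOCKED : ERROR;    /* double lock  */
    default:       return ERROR;
    }
}

int main(void) {
    enum event trace[] = {EV_LOCK, EV_USE, EV_UNLOCK, EV_USE}; /* last is bad */
    enum state s = UNLOCKED;
    for (unsigned i = 0; i < sizeof(trace) / sizeof(trace[0]); i++) {
        s = next(s, trace[i]);
        if (s == ERROR) {
            printf("violation at event %u\n", i);
            return 1;
        }
    }
    printf("trace complies with the model\n");
    return 0;
}
```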

Requirements

The student should be fluent in the C/C++ programming languages. Some knowledge of, and experience with, Qt or other GUI subsystems is desirable. Students of an MSc degree in computer engineering or computer science are well suited to undertake this thesis project.

Benefits

The student will take a deep dive into a hot topic in the development of time-critical and safety-critical software, and gain the chance to develop a key tool helping to improve an automata-based run-time verification toolchain for the Linux kernel.

Collaborations

The student will have the opportunity to be involved in state-of-the-art research activities carried out in the context of an ongoing international collaboration between Scuola Sant'Anna and Red Hat.

Background Readings

Adaptive high-performance networking
This description is available in English only.

Description

High-performance networking primitives based on kernel bypass, such as DPDK, are attracting increasing attention from industry practitioners and academics, thanks to their capability to achieve higher throughput and lower latencies than achievable with traditional socket-based primitives, which require OS intervention for the transmission of each packet or batch.
However, the achievable performance strictly depends on how many CPUs on the platform are dedicated to the switching logic among the multiple entities that need to communicate. Said logic becomes a critical part of the system, constituting a potential bottleneck for techniques of this kind. The consequent computational requirements, as well as the associated power consumption levels, may turn out to be excessive during periods in which the hosted services exhibit moderate workloads.
This thesis proposal deals with realizing an adaptive high-performance networking switch for DPDK, capable of dynamically switching among a number of modes, including the ability to instantiate additional threads for packet switching and remove them as needed, based on the instantaneous conditions of the system.
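The adaptivity idea can be sketched, in plain pthread code rather than DPDK, as a polling worker that parks itself when the observed backlog drops and is woken up again when traffic resumes; queue and threshold handling below are deliberately simplified:

```c
/* Sketch of the adaptivity idea: a polling worker that parks itself on a
 * condition variable when the backlog drops to zero, and is woken up again
 * when traffic resumes. Generic pthread code, not DPDK; the backlog counter
 * stands in for a real packet queue. Compile with -pthread. */
#include <pthread.h>
#include <stdatomic.h>
#include <unistd.h>

static atomic_int backlog;             /* packets waiting to be switched */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;

static void *poller(void *arg) {
    (void)arg;
    for (;;) {
        if (atomic_load(&backlog) == 0) {
            /* Low load: stop burning a CPU, park until new packets arrive. */
            pthread_mutex_lock(&m);
            while (atomic_load(&backlog) == 0)
                pthread_cond_wait(&cv, &m);
            pthread_mutex_unlock(&m);
        }
        /* High load: busy-poll and drain the backlog in batches. */
        while (atomic_load(&backlog) > 0)
            atomic_fetch_sub(&backlog, 1);   /* stand-in for real forwarding */
    }
    return NULL;
}

/* Called by the producer side (e.g., an RX path) when packets are enqueued. */
static void notify_packets(int n) {
    atomic_fetch_add(&backlog, n);
    pthread_mutex_lock(&m);
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&m);
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, poller, NULL);
    notify_packets(1000);   /* simulate a burst */
    sleep(1);               /* let the poller drain it, then exit */
    return 0;
}
```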

Requirements

The student should be fluent in socket-based networking primitives and the C programming language. Some knowledge of and experience with parallel programming is desirable. Computer engineering, computer science and telecommunication engineering are all excellent backgrounds for undertaking an MSc thesis project on the proposed topics.

Benefits

The student will take a deep dive into efficient software engineering for high-performance networking switches, gaining practical, hands-on experience with some of the key and hottest technologies for the development of future data-intensive distributed software in the cloud and distributed computing industry.

Collaborations

The student will have the opportunity to be involved in state-of-the-art research activities carried out in the context of an international collaboration tackling some of the most important challenges in realizing high-performance networking services.

Background Readings

Model-Driven Engineering with multi-core, GPU or FPGA acceleration
This description is available in English only.

Description

Model-Driven Engineering and Model-Based Design are gaining momentum in various embedded industrial fields like automotive, railroad, aerospace and others. These techniques involve the use of a number of tools that help system designers and software engineers carry out the whole software life-cycle of a component or application: from requirements specification to high-level architecture design, down to low-level component specification and the final implementation phases. The use of MDE/MBD techniques, also enriched by automated code generation tools, promises to reduce the potential gap between the features and properties of the implemented system and those stated in the initial high-level specifications, including critical non-functional requirements concerning the performance and timeliness of the realized components.
However, the computational requirements of modern cyber-physical systems have grown enormously in the last decade, with the growing interest in deploying complex robot control algorithms requiring on-line optimizations, sophisticated computer vision algorithms for object recognition, trajectory detection and forecasting, and machine learning and artificial intelligence techniques applying data analysis and forecasting as required in predictive maintenance, towards the full potential of the so-called Industry 4.0 revolution. All of these algorithms need expensive vector and matrix operations that are conveniently accelerated through the use of multi- and many-core general-purpose computing platforms, GP-GPU acceleration or even FPGA acceleration. However, writing software capable of running on a wide variety of heterogeneous hardware elements is quite cumbersome nowadays.
The AMPERE European Project is tackling these challenges, with a consortium featuring key industrial players in the field of high-performance software for automotive and railroad use-cases, like BOSCH and THALES, and renowned international research centers in the fields of high-performance computing, real-time and energy-efficient systems like the Barcelona Supercomputing Center, the RETIS of Scuola Superiore Sant'Anna in Pisa, the ETH in Zurich and the ISEP engineering institute in Porto.
This thesis proposal deals with extending the open-source APP4MC plugin for Eclipse, supporting the AMALTHEA MDE methodology, for the specification of Runnables with either: a) multi-core acceleration via OpenMP; b) GPU-acceleration via OpenCL; c) FPGA-acceleration via the FRED framework realized at the RETIS.
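As a flavour of case (a), a Runnable body accelerated with OpenMP could look like the hand-written sketch below (code generated from an AMALTHEA model would of course follow the APP4MC conventions; the matrix-vector kernel is just an example):

```c
/* Hand-written sketch of a Runnable body accelerated with OpenMP
 * (case (a) above): a matrix-vector product parallelized across cores.
 * Compile with -fopenmp. */
#include <stdio.h>
#include <omp.h>

#define N 1024

static double A[N][N], x[N], y[N];

void runnable_matvec(void) {
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++) {
        double acc = 0.0;
        for (int j = 0; j < N; j++)
            acc += A[i][j] * x[j];
        y[i] = acc;
    }
}

int main(void) {
    /* Initialize a trivial test case: A = 2*I, x = all ones. */
    for (int i = 0; i < N; i++) {
        x[i] = 1.0;
        for (int j = 0; j < N; j++)
            A[i][j] = (i == j) ? 2.0 : 0.0;
    }
    runnable_matvec();
    printf("y[0] = %f (threads available: %d)\n", y[0], omp_get_max_threads());
    return 0;
}
```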

Requirements

The student should be familiar with modeling languages and frameworks such as UML or AUTOSAR. The student should be fluent in programming in Java and C/C++. Some knowledge of and experience with parallel and real-time software programming is desirable. Computer engineering, computer science and electronic engineering are all excellent backgrounds for undertaking an MSc thesis project on the proposed topics.

Benefits

The student will take a deep dive into efficient software engineering for parallel and heterogeneous hardware boards, gaining practical, hands-on experience with some of the key technologies for the development of future software components in the embedded industry.

Industrial collaborations

This thesis proposal is framed in the context of the AMPERE European Project, in which the RETIS has collaborated with renowned industrial players in the field of high-performance software for automotive and railroad use-cases, like BOSCH.

Background Readings

Efficient inter-container communication mechanisms in Cloud Computing and NFV infrastructures
This description is available in English only.
More and more software components and services are nowadays deployed over shared infrastructures, either as available at a public cloud provider, or in-house within private cloud data centres. In this context, OS-level virtualization mechanisms, such as Linux Containers (LXC), Docker or others, are growing in demand and popularity as deployment and isolation mechanisms, thanks to their increased efficiency in resource usage when compared with traditional machine virtualization techniques. Containers are becoming a fundamental building block in novel architectures for distributed fault-tolerant components, which are increasingly based on micro-services. This is a development trend where monolithic software is split into a multitude of smaller services, which can be independently designed, developed, deployed and scaled out as collections of containers, enhancing the reliability of the overall solution and adding a higher degree of flexibility in the management of the underlying physical resources needed at run-time.
Current middleware solutions for communications among containers involve an extensive use of networking protocols, often based on TCP/IP, HTTP, XML-RPC, JSON-RPC, SOAP or others, for letting different container environments communicate with each other, often leading to an excess of overheads. The purpose of this thesis proposal is to investigate more efficient mechanisms, particularly for services that end up co-located on the same physical host, with a use-case focused on either distributed multimedia processing, or virtualized network functions in an NFV infrastructure.
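As an example of the kind of mechanism to be investigated, the sketch below uses POSIX shared memory for communication between co-located services; it assumes the two containers can see the same shared-memory object (e.g., through a shared /dev/shm mount or IPC namespace), and the object name and sizes are illustrative:

```c
/* Sketch of zero-copy-style communication between co-located containers
 * through POSIX shared memory. Assumes both containers can access the same
 * shared-memory object; names and sizes are illustrative. May need -lrt. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/svc_chain_buf"
#define SHM_SIZE 4096

int main(void) {
    /* Producer side: create and map the shared buffer. */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, SHM_SIZE) < 0) { perror("ftruncate"); return 1; }

    char *buf = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* A message written here is visible to the consumer container without
     * traversing the network stack (a real design would add a ring buffer
     * and synchronization, e.g., futexes or eventfd). */
    strcpy(buf, "hello from the producer container");
    printf("wrote: %s\n", buf);

    munmap(buf, SHM_SIZE);
    close(fd);
    /* shm_unlink(SHM_NAME) would be called when the channel is torn down. */
    return 0;
}
```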

Requirements

Strong programming skills in C/C++ and Python, solid knowledge of concurrent programming and OS primitives for inter-process communications (IPC) and synchronization.

Benefits

The student will have a good opportunity to refine his/her skills in the above fields, and gain a unique experience with developing distributed services over shared physical infrastructures, building a practical experience on advanced OS concepts, which are fundamental in the ICT (Information and Communications Technology) industry.

Industrial collaborations

This thesis proposal is framed in the context of a long-standing industrial collaboration with Ericsson, Stockholm (Sweden).

Background Readings

Fault-tolerant replication log with real-time characteristics and high reliability
NoSQL database services are gaining ground in the cloud & distributed computing world as a key technology for building scalable, real-time applications, allowing them to store and retrieve data while respecting predetermined timing, consistency and availability requirements (which may be formalized in terms of an SLA -- service-level agreement).
A key component of such a system is the replication log, which guarantees a consistent view of the sequence of operations affecting a data object. Realizing a fault-tolerant replication log with strong reliability guarantees, yet predictable performance, requires tackling a number of issues in different areas, from software engineering to concurrent programming, operating systems and kernel internals, down to CPU and disk scheduling.
This thesis proposes the design and implementation of a fault-tolerant, real-time replication log with a minimal set of features.
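As a minimal illustration of the local persistence side of such a component (replication, leader election and real-time I/O scheduling are deliberately left out), entries could be appended to a file and made durable before being acknowledged; the file name and entry format below are assumptions:

```c
/* Minimal sketch of the local persistence layer of a replication log:
 * fixed-format entries are appended to a file and made durable with
 * fdatasync() before being acknowledged. Illustrative only. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

struct log_entry {
    uint64_t index;       /* monotonically increasing position in the log */
    uint64_t term;        /* epoch/term of the leader that produced it    */
    char     payload[48]; /* opaque operation to apply to the data object */
};

static int log_append(int fd, const struct log_entry *e) {
    ssize_t n = write(fd, e, sizeof(*e));   /* append-only write */
    if (n != (ssize_t)sizeof(*e))
        return -1;
    return fdatasync(fd);                   /* durable before acking */
}

int main(void) {
    int fd = open("replog.bin", O_WRONLY | O_CREAT | O_APPEND, 0600);
    if (fd < 0) { perror("open"); return 1; }

    struct log_entry e = { .index = 1, .term = 1 };
    strcpy(e.payload, "SET key=42");
    if (log_append(fd, &e) == 0)
        printf("entry %llu persisted\n", (unsigned long long)e.index);

    close(fd);
    return 0;
}
```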

Requirements

The student must have excellent programming skills in C/C++ and/or Java, experience with concurrent programming and multi-threading, strong competences in computer architectures and their impact on performance, operating systems and Linux, and familiarity with the development of distributed software.

Benefits

The student will have the opportunity to deepen his/her skills in the above fields, and to gain experience in building distributed, fault-tolerant, real-time software components, which are of fundamental importance for careers in the ICT world.

Background Readings

Improvements to the SCHED_DEADLINE Linux process scheduler for real-time multimedia
The Linux kernel has recently been enriched with SCHED_DEADLINE, an EDF-based process scheduler that is particularly promising for real-time and multimedia workloads. The scheduler exhibits a minimum set of features, but several extensions are possible for various use-cases. In this project, the student will design and realize extensions suitable to support a specific multimedia-oriented use-case (e.g., when using the JACK or PipeWire architectures for low-latency audio, or the new AAudio API for low-latency audio processing on Android), and will adapt user-space application components to take advantage of the enriched scheduler.
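For reference, a periodic thread can already be attached to SCHED_DEADLINE through the sched_setattr() system call; the sketch below shows the basic usage with illustrative parameters (2 ms of budget every 10 ms):

```c
/* Basic SCHED_DEADLINE usage: reserve 2 ms of CPU time every 10 ms for the
 * calling thread via the sched_setattr() system call. Parameters are
 * illustrative; run as root (or with CAP_SYS_NICE). */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

/* Defined here because older glibc versions do not export this struct. */
struct sched_attr {
    uint32_t size;
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;
    uint32_t sched_priority;
    uint64_t sched_runtime;
    uint64_t sched_deadline;
    uint64_t sched_period;
};

static int do_sched_setattr(pid_t pid, const struct sched_attr *attr,
                            unsigned int flags) {
    return syscall(SYS_sched_setattr, pid, attr, flags);
}

int main(void) {
    struct sched_attr attr = {
        .size           = sizeof(attr),
        .sched_policy   = SCHED_DEADLINE,
        .sched_runtime  =  2 * 1000 * 1000ULL,  /*  2 ms budget   */
        .sched_deadline = 10 * 1000 * 1000ULL,  /* 10 ms deadline */
        .sched_period   = 10 * 1000 * 1000ULL,  /* 10 ms period   */
    };
    if (do_sched_setattr(0, &attr, 0) < 0) {    /* 0 = calling thread */
        perror("sched_setattr");
        return 1;
    }
    printf("running under SCHED_DEADLINE\n");
    /* ... periodic audio-processing loop would go here ... */
    return 0;
}
```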

Requirements

The student shall have strong programming skills in C/C++, experience with concurrent/multi-threaded programming, solid knowledge and understanding of computer architectures and their performance implications, operating systems internals and Linux, and be familiar with developing kernel-level software.

Benefits

The student will have a good opportunity to refine his/her skills in the above fields, and gain a unique experience with developing real-time multimedia-oriented systems.

Industrial collaborations

In this area, we have long-standing industrial collaborations with Arm (Cambridge, UK) and Red Hat.

Background Readings

Real-time spectrum analyzer for audio signals with Artificial Intelligence capabilities
This description is available in English only.
The project consists of realizing a spectrum analyzer for audio signals which applies neural networks in order to recognize common sound patterns. The project may take various directions depending on the interests and skills of the candidate. For example, the software might be able to recognize the tones of notes played by an instrument (realizing a real-time sound-to-MIDI component), or it might recognize different sound types or sound patterns, or it might even venture into the land of voice recognition. The project might be realized as a Qt or Gnome desktop application, using the JACK framework for low-latency audio or the Advanced Linux Sound Architecture (ALSA) sound library on Linux, or it might be realized as an Android application for smartphones and tablets using the new AAudio API for low-latency audio processing on Android. For recognition of sounds and/or sound patterns, the project might rely on machine learning, neural networks and/or traditional optimization techniques.
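The core of the analyzer can be sketched as follows: a Hann-windowed audio frame is transformed into a magnitude spectrum (here with a naive DFT and a synthetic 440 Hz tone; a real application would use an FFT library and capture frames from JACK/ALSA/AAudio), and the resulting spectrum is what would then be fed to a classifier:

```c
/* Sketch of the core of a spectrum analyzer: magnitude spectrum of a
 * Hann-windowed audio frame via a naive DFT, applied to a synthetic
 * 440 Hz tone. Compile with -lm. */
#include <math.h>
#include <stdio.h>

#define N  1024          /* frame size in samples */
#define FS 48000.0       /* sampling rate in Hz   */

int main(void) {
    double frame[N], mag[N / 2];

    /* Synthetic input: a 440 Hz sine, standing in for captured audio. */
    for (int n = 0; n < N; n++) {
        double hann = 0.5 * (1.0 - cos(2.0 * M_PI * n / (N - 1)));
        frame[n] = hann * sin(2.0 * M_PI * 440.0 * n / FS);
    }

    /* Naive DFT: O(N^2), fine for a sketch; use an FFT library for real use. */
    for (int k = 0; k < N / 2; k++) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; n++) {
            double ph = 2.0 * M_PI * k * n / N;
            re += frame[n] * cos(ph);
            im -= frame[n] * sin(ph);
        }
        mag[k] = sqrt(re * re + im * im);
    }

    /* Report the dominant bin; feeding mag[] to a classifier would be the
     * AI part of the project. */
    int peak = 0;
    for (int k = 1; k < N / 2; k++)
        if (mag[k] > mag[peak]) peak = k;
    printf("peak at ~%.1f Hz\n", peak * FS / N);
    return 0;
}
```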

Requirements

The student shall be fluent in C/C++ and/or Java programming and be familiar with the development of applications with a Graphical User Interface (GUI).

Benefits

The student will gain insightful knowledge about how to build real-time audio processing applications, enhanced with a GUI either on desktop or Android systems.

Background Readings

Temporal predictability of distributed, virtualized real-time applications
In recent years, virtualization technologies have been establishing themselves as an effective solution for providing even complex software services to distributed applications. These technologies make it possible to abstract away the physical machine on which computations actually take place, creating a set of virtual machines (VMs) and thus allowing more than one operating system (with its applications) to run on the same physical machine. Unfortunately, however, currently existing virtualization technologies are often inadequate to support applications with timing constraints, and cannot stably guarantee predetermined quality-of-service levels to the end user. Nowadays, many distributed applications require bounded and predictable response times in order to provide their services correctly: for instance, virtual reality, telepresence or, more generally, on-line collaboration applications, which need to acquire, process and display data with fairly precise timing.
The problem of guaranteeing a sufficient amount of resources, with the right temporal granularity, to this kind of applications becomes even thornier because of the interference that may arise among VMs engaging different resources, typically computation and networking. For instance, a VM with heavy I/O traffic may negatively affect the processing performance of other VMs.
This thesis proposes to investigate the issues preventing predictable, real-time performance of virtualized software components, and to experiment with some of the state-of-the-art temporal isolation mechanisms from the world of soft real-time systems.
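As one very simple example of a temporal-isolation knob available on a Linux host, the CPU bandwidth of the cgroup hosting a VM's vCPU threads can be capped through the cgroup v2 cpu.max interface; the cgroup path and the values below are purely illustrative:

```c
/* Sketch of one simple temporal-isolation knob on a Linux host: capping the
 * CPU bandwidth of a cgroup (e.g., the one containing a VM's vCPU threads)
 * via the cgroup v2 "cpu.max" interface. The cgroup path and the values
 * (20 ms of CPU every 100 ms) are illustrative; run with enough privileges. */
#include <stdio.h>

int main(void) {
    const char *path = "/sys/fs/cgroup/vm-guest1/cpu.max"; /* hypothetical */
    FILE *f = fopen(path, "w");
    if (!f) { perror("fopen"); return 1; }

    /* Format is "<quota_us> <period_us>": at most 20000 us every 100000 us. */
    fprintf(f, "20000 100000\n");
    fclose(f);

    printf("CPU bandwidth cap applied to %s\n", path);
    return 0;
}
```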

Requirements

Excellent knowledge of the C language, the TCP/IP stack, and the so-called "servers" from the real-time scheduling literature. Good familiarity with the Linux Operating System, and an interest in experimenting with non-standard kernel features.

Benefits

The student will have the opportunity to concretely apply some aspects of real-time systems theory, in the extremely thorny context of distributed, virtualized real-time applications, using temporal isolation mechanisms that will constitute the foundations of Quality of Service support in the Operating Systems of tomorrow. Moreover, he/she will gain familiarity with virtualization tools such as KVM, which lie at the basis of state-of-the-art network infrastructures.
Simulation of cloud computing infrastructures
Description missing
Operating Systems and scheduling for scalable multicore systems
Multicore systems are gaining ground at a relentless pace. In the near future, the computing world will be dominated by mobile devices acting as access points to fully distributed applications made available remotely by suitable providers. The Cloud Computing applications of tomorrow will make extensive use of massively parallel systems, the so-called many-core systems, for which today's Operating Systems prove inadequate for optimal resource management.
In this area, the proposal is to investigate scalability issues at the Operating System kernel level. In particular, the possibilities for thesis work in this area are manifold:
  • simulation of the impact on applications of innovative kernel models recently appeared in the literature with scalability goals with respect to the number of cores, for instance imposing a partitioning of functionality across the available cores, thus reducing contention for access to shared kernel data structures; the simulation should take into account the impact of the hardware interconnect topology on the communications among the different cores, both explicit (inter-core interrupts) and implicit (cache coherence protocols);
  • distributed scheduling algorithms that scale to thousands of cores, with load-balancing policies based on only partial knowledge of the system state; possibly, the use in this context of concepts from the world of peer-to-peer systems and gossip protocols may be explored;
  • modifications to the Linux kernel to improve aspects related to scalability with respect to the number of available cores; for instance, redesigning some key shared kernel data structures so as to reduce contention among the many cores accessing them (see the sketch below), changes to the scheduler and to the load-balancing logic for a greater decoupling of the operations carried out by each core, partitioning of the available hardware resources among independent (and possibly heterogeneous) kernel instances, etc.
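As a tiny user-space illustration of the contention-reduction idea mentioned in the last item above, per-core (here, per-thread) sharded counters with cache-line padding avoid having all cores bounce a single cache line, and readers aggregate the shards only when needed:

```c
/* User-space illustration of the "reduce contention on shared kernel data"
 * idea: instead of one shared counter bounced among all cores, each thread
 * updates its own cache-line-aligned slot, and readers sum the slots.
 * Thread and iteration counts are arbitrary. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define ITERS    1000000

struct padded_counter {
    volatile long value;
    char pad[64 - sizeof(long)];      /* keep each slot on its own cache line */
} __attribute__((aligned(64)));

static struct padded_counter counters[NTHREADS];

static void *worker(void *arg) {
    long id = (long)arg;
    for (long i = 0; i < ITERS; i++)
        counters[id].value++;         /* private slot: no sharing, no atomics */
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);

    long total = 0;
    for (int i = 0; i < NTHREADS; i++)
        total += counters[i].value;   /* aggregate only when reading */
    printf("total = %ld\n", total);
    return 0;
}
```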

Requirements

In general, for all theses in this area, an excellent knowledge of operating systems and computer architectures is required. In addition, each specific thesis proposal may require further individual knowledge and skills.

Benefits

The student will have the opportunity to acquire competences and experience in the world of parallel computing, concurrent and distributed programming, and Operating System support for massively parallel systems, with particular reference to the design of scalable and efficient scheduling algorithms and synchronization primitives.
