4th International Workshop on Analysis Tools and Methodologies for Embedded and Real-time Systems
July 9th, 2013, Paris, France
In the research area of real-time and embedded systems, comparing results achieved by different research efforts is often very difficult or even impossible due to the lack of common tools, data sets or methodologies upon which to base the comparison. For example, different authors use different algorithms for generating random task sets, different application traces when simulating dynamic real-time systems, and different simulation engines when simulating scheduling algorithms. To make the problem worse, different research communities (e.g., real-time, networking, storage, parallel and distributed computing, service-oriented, GRID and cloud computing) often consider the same or very similar problems and scenarios (e.g., the performance of multimedia applications) from different but complementary perspectives, and they use very different abstraction models and simulation engines, making it very difficult to build an integrated view and to compare approaches with one another.
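To illustrate why the choice of task-set generator matters for comparability, the sketch below implements the well-known UUniFast algorithm by Bini and Buttazzo, which draws n task utilizations uniformly over the region summing to a given total utilization; simpler ad-hoc generators bias the distribution and can skew schedulability results. The function name and parameters here are illustrative, not taken from any particular tool.

```python
import random

def uunifast(n, u_total, rng):
    """Generate n task utilizations summing to u_total, distributed
    uniformly over the valid region (UUniFast, Bini & Buttazzo)."""
    utilizations = []
    remaining = u_total
    for i in range(1, n):
        # Draw the utilization budget left for the remaining n - i tasks.
        next_remaining = remaining * rng.random() ** (1.0 / (n - i))
        utilizations.append(remaining - next_remaining)
        remaining = next_remaining
    utilizations.append(remaining)
    return utilizations

# Example: 5 tasks with total utilization 2.0, seeded for reproducibility.
tasks = uunifast(5, 2.0, random.Random(42))
```

Publishing the generator (and the seed) alongside experimental results is precisely the kind of practice the workshop aims to encourage, since it lets other researchers reproduce the same task sets.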
Research in the field of real-time and embedded systems (and not only) would greatly benefit from the availability of well-engineered, possibly open tools, simulation frameworks and data sets that may constitute common metrics for evaluating simulation or experimental results in the area. A wide set of reusable data sets or behavioural models derived from realistic industrial use-cases, over which the performance of novel algorithms can be evaluated, would also be valuable. The availability of such items would increase the possibility of comparing novel techniques against problems already tackled by others from the multifaceted viewpoints of effectiveness, overhead, performance, applicability, and others.
The ambitious goal of the International Workshop on Analysis Tools and Methodologies for Embedded and Real-time Systems is to create a common ground and a community to collect methodologies, software tools, best practices, data sets, application models, benchmarks and any other means of improving the comparability of results in the current practice of research on real-time and embedded systems. People from industry are also welcome to contribute realistic data sets or methods from their own experience, which in the mid-term may serve as benchmarks for assessing real-time research efforts.
The research literature on real-time and embedded systems often insists on giving importance mostly to task scheduling, neglecting other practical aspects of the system architecture that may strongly impact the performance of distributed real-time applications, such as:
- the presence of shared caches and of multiple memory controllers and paths to main memory in multi-core and multi-processor systems (particularly in NUMA architectures);
- network technologies and scheduling beyond the well-known and well-investigated CAN bus, such as point-to-point links or standard TCP/IP;
- disk access and scheduling, along with the possibility of simulating different existing storage technologies (e.g., SSDs vs traditional HDs) and architectures (e.g., NAS).
Furthermore, it is often very useful if the simulation accounts for critical elements of the run-time environment software architecture, such as:
- the device driver architecture of the operating system;
- the presence of hypervisors and various virtualization technologies, along with how their architecture handles interrupts and communications;
- probabilistic models of the impact of factors that may be outside the control of the system designer, such as latency and bandwidth variability over open TCP/IP networks (e.g., the Internet) or over wireless networks, the impact of virtualization technologies, and workload fluctuations at run-time.
All of the above factors, and surely many others, have a non-negligible impact on the responsiveness of distributed real-time systems and applications, and deserve to be accurately modelled and simulated when evaluating novel mechanisms and comparing approaches with one another, in order to achieve realistic results.
The focus of the 2013 edition of WATERS is on tools, benchmarks, and data sets that are useful for a comprehensive analysis and evaluation of systems where many of the above factors are considered in an integrated way (e.g., including an integrated view on computing, networking, and storage aspects).
The workshop seeks original contributions on methods and tools for real-time and embedded systems analysis, simulation, modelling and benchmarking. We look for papers describing well-engineered, highly reusable, possibly open tools, methodologies, benchmarks and data sets that can be used by other researchers.
Areas of interest include, but are not limited to:
Submitted papers should follow the IEEE conference format (2 columns, 10 pt, single-line spacing) and should not exceed 6 pages in length. Papers must be submitted in PDF format through the softconf on-line system. The papers will be reviewed by the workshop Program Committee.
If a paper is accepted, at least one author should register for the workshop following indications sent in the notification of acceptance, and present the paper at the workshop in person.
The best papers from the workshop will be invited for submission in extended form to a special issue of the Elsevier Journal of Systems Architecture. The extended papers will undergo a new peer-review process.