Exploring the boundary between accuracy and performance in recurrent neural networks

When it comes to interpreting streams of data with modern artificial intelligence techniques, such as audio in speech recognition, the computational requirements of state-of-the-art models can easily skyrocket and result in huge power demands. However, accepting a small loss in accuracy can go a long way toward reducing their resource impact. This work explores the boundary between accuracy and performance in such a context.

By Alessandro Pappalardo, PhD student @Politecnico di Milano

Modern artificial intelligence approaches to problems such as image captioning, i.e. describing the content of an image, can already have significant computational requirements. On top of that, scaling such techniques from processing a single data point, e.g. an image, to a sequence of them, e.g. a video, increases their requirements non-linearly. The reason is that interpreting a…
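One common way to trade a small accuracy loss for much cheaper arithmetic is to reduce numerical precision. As a minimal, hypothetical sketch (not necessarily the technique used in the work above), uniform 8-bit quantization of a weight matrix looks like this:

```python
import numpy as np

# Illustrative sketch: map float32 weights onto int8 with a single scale
# factor. int8 storage and arithmetic are far cheaper than float32, at the
# cost of a small, bounded rounding error per weight.
def quantize_int8(w):
    scale = np.abs(w).max() / 127.0          # map the float range onto [-127, 127]
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding bounds the per-weight reconstruction error by about scale / 2.
print(np.abs(w - w_hat).max() <= scale)
```

The single global scale factor is the simplest choice; finer-grained (per-row or per-channel) scales usually recover more accuracy at slightly higher bookkeeping cost.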
HyPPO: Hybrid Performance-aware Power-capping Orchestration in containerized environments

Energy proportionality is the key to reducing the TCO (Total Cost of Ownership) of modern datacenters and on-premise systems. HyPPO achieves energy proportionality by enabling energy awareness and autonomic management of containerized environments based on Kubernetes.

By Marco Arnaboldi, PhD student @Politecnico di Milano

In the last decade, cloud computing has become the go-to choice for companies and developers to deploy, manage and maintain web services at scale. In this context, Docker containers are becoming the de-facto standard for cloud-native applications due to their flexibility and their ability to enable faster development and deployment cycles. Among the various workloads running inside cloud platforms, an interesting category is represented by On-Line Data Intensive (OLDI) workloads. These workloads are typically composed of large deployments with…
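The core idea of performance-aware power capping can be sketched as a simple control loop: tighten the power cap while the latency SLO holds, and relax it when the SLO is violated. This is an illustrative toy, not HyPPO's actual controller; the function name, step size and cap bounds are assumptions:

```python
# Toy performance-aware power-capping step (illustrative, not HyPPO's
# controller): move the cap toward lower power whenever the latency SLO is
# met, and back toward higher power as soon as it is violated.
def next_cap(cap_w, latency_ms, slo_ms, step_w=5, min_w=50, max_w=200):
    if latency_ms > slo_ms:
        return min(cap_w + step_w, max_w)   # SLO violated: relax the cap
    return max(cap_w - step_w, min_w)       # SLO met: save energy

print(next_cap(120, 9.0, 10.0))   # → 115  (SLO met, tighten cap)
print(next_cap(120, 12.0, 10.0))  # → 125  (SLO violated, relax cap)
```

A real orchestrator would apply hysteresis and act on hardware interfaces such as RAPL, but the proportional back-off above captures the energy-proportionality intuition.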
Exploiting FPGA from Data Science Programming Languages

This work presents different methodologies for creating hardware libraries for FPGAs, allowing data scientists and software developers to use such devices transparently from Matlab, Python and R applications running on desktop or embedded systems.

By Luca Stornaiuolo, PhD student @Politecnico di Milano

In recent years, the huge amount of available data has led data scientists to look for increasingly powerful systems to process it. Within this context, Field Programmable Gate Arrays (FPGAs) are a promising solution for improving system performance while keeping energy consumption low. Nevertheless, exploiting FPGAs is very challenging due to the high level of expertise required to program them. Many High-Level Synthesis tools have been developed to help programmers during the flow of accelerating their algorithms through…
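"Transparent" use of an accelerator typically means a drop-in function that dispatches to the FPGA when one is present and falls back to software otherwise. The sketch below illustrates that dispatch pattern only; the names `matmul`, `_fpga_available` and `_fpga_matmul` are hypothetical, not the actual API of the hardware libraries described above:

```python
import numpy as np

def _fpga_available():
    # In a real deployment this would probe for the device/bitstream.
    return False

def _fpga_matmul(a, b):
    # Stub standing in for the call into the FPGA kernel.
    raise NotImplementedError("would invoke the hardware library here")

def matmul(a, b):
    """Drop-in matrix multiply: offload if possible, else plain NumPy."""
    if _fpga_available():
        return _fpga_matmul(a, b)
    return a @ b                      # transparent software fallback

a = np.arange(6, dtype=np.float32).reshape(2, 3)
b = np.ones((3, 2), dtype=np.float32)
print(matmul(a, b))                   # identical call whether or not an FPGA exists
```

The caller's code never changes between the accelerated and the fallback path, which is exactly what makes the approach usable from Matlab, Python and R without hardware expertise.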
DEEP-mon: monitoring your data-center power consumption

Docker containers are spreading and becoming the reference tool for managing applications in the cloud. DEEP-mon efficiently monitors the performance and power consumption of containers to enable energy awareness and autonomic management of the next generation of cloud computing systems.

By Rolando Brondolin, PhD student @Politecnico di Milano

Data-centers and cloud computing are part of our everyday life, even if we don't always see them. From the instant message we sent this morning with our smartphones to car sharing, e-commerce and social media, most of our interactions with the web are powered by cloud computing. In this context, performance and energy efficiency are two important aspects of this web revolution. On the one hand, performance means speed and ease of use for the applications that we use in our everyday life. On the…
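Power meters report consumption per socket or package, not per container, so a monitor must attribute the measured power to containers somehow. A minimal sketch of one obvious attribution scheme (proportional to CPU time; whether DEEP-mon uses exactly this model is an assumption):

```python
# Illustrative power attribution: split a measured package power among
# containers in proportion to the CPU time each consumed in the interval.
def attribute_power(package_watts, cpu_time_per_container):
    total = sum(cpu_time_per_container.values())
    return {name: package_watts * t / total
            for name, t in cpu_time_per_container.items()}

print(attribute_power(40.0, {"web": 6.0, "db": 2.0}))
# → {'web': 30.0, 'db': 10.0}
```

In practice the CPU times would come from the kernel (e.g. cgroup accounting) and the package power from a hardware interface such as RAPL; the attribution arithmetic stays this simple.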
Enhancing Personalized Medicine Research with a HUG

Hardware for hUman Genomics (HUG) is a framework that exploits hardware accelerators (FPGAs) to speed up research in the field of personalized medicine. Its high level of abstraction allows researchers and doctors to exploit the potential of hardware accelerators to create drugs tailored to the DNA of the individual.

By Lorenzo Di Tucci, PhD student @Politecnico di Milano

In the coming years, human genome research will likely transform medical practices. The unique genetic profile of an individual and the knowledge of the molecular basis of diseases are leading to the development of personalized medicines and therapies, but the exponential growth of available genomic data requires a computational effort that may limit the progress of personalized medicine. Within this context, HUG is a novel hardware and software integrated system developed at NECSTLab (Politecnico…
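Much of the computational effort in genomics comes down to sequence-alignment kernels. As an illustration of the kind of kernel commonly offloaded to FPGAs (whether HUG accelerates exactly this one is an assumption), here is a minimal Smith-Waterman local-alignment score:

```python
# Minimal Smith-Waterman local alignment score (illustrative scoring
# parameters). The O(len(a) * len(b)) dynamic-programming fill is the part
# that FPGA implementations parallelize along the anti-diagonals.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "GATTACA"))  # → 14 (seven matches at +2 each)
```

The quadratic dependency on sequence length is why genome-scale workloads quickly outgrow software-only systems.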
Arancino: A tasty framework to measure Evasive Malware

Malware authors have been developing techniques to hide their creations from analysis platforms. Hence, we measured which of those techniques are used to detect and break analysis systems based on Dynamic Binary Instrumentation (DBI) tools.

By Mario Polino, Postdoc researcher @Politecnico di Milano

After decades of research and development, the problem of malware still persists. Certainly, the approach and the motivation behind malware creation and spread have changed. Nowadays there are several platforms and advanced systems that can perform accurate analysis on software and, in particular, on malware samples. For this reason, malware authors have been developing techniques to hide their creations from such platforms. E.g., they run tests to check whether the sample is executing under a virtual machine or whether the list,…
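To make the idea concrete, here is a toy illustration of two evasive-check families of the kind such measurements look for: artifact probing (does the environment look like a VM?) and timing probing (does execution look instrumented?). The specific file path and vendor strings are illustrative examples, not Arancino's detection list:

```python
import time

# Example VM artifact: DMI product name often exposes the hypervisor vendor.
VM_ARTIFACTS = ["/sys/class/dmi/id/product_name"]

def looks_virtualized():
    """Artifact check: scan known locations for hypervisor vendor strings."""
    for path in VM_ARTIFACTS:
        try:
            with open(path) as f:
                if any(v in f.read() for v in ("VirtualBox", "VMware", "KVM")):
                    return True
        except OSError:
            pass  # file absent: no evidence from this artifact
    return False

def instrumented_timing(threshold_s=0.25):
    """Timing check: DBI tools add per-instruction overhead, so a tight loop
    running far slower than expected hints at instrumentation."""
    start = time.perf_counter()
    for _ in range(100_000):
        pass
    return (time.perf_counter() - start) > threshold_s
```

A measurement framework runs samples under a DBI tool and records which of these probe families each sample exercises, rather than trying to hide from them.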