The History of Programming Languages conference series produces accurate historical records and descriptions
of programming language design, development, and philosophy. It is infrequently held: the first three were
in 1978, 1993, and 2007. It’s now time for HOPL IV.
Welcome to the 2021 ACM SIGPLAN International Symposium on Memory Management (ISMM). ISMM is a premier forum for research in memory management and solicits papers from areas including but not limited to:
Memory system design and analysis
Hardware support for memory management
Memory management for large-scale data-intensive systems
Novel memory architectures
Memory management at datacenter and cloud scales
Garbage collection algorithms and implementations
Formal analysis and verification of memory management algorithms
Compiler analyses to aid memory management
Tools to analyze memory usage of programs
Memory allocation and de-allocation
Empirical analysis of memory intensive programs
Formal analysis and verification of memory intensive programs
Memory management for machine learning systems
Programming and management of emerging or persistent memories
Welcome to the 2021 edition of the International Conference on Languages, Compilers,
Tools and Theory of Embedded Systems. LCTES provides a link between the programming languages and the
embedded systems communities. Researchers and developers in these areas are addressing many similar problems,
but with different backgrounds and approaches. LCTES is intended to expose researchers and developers from either
area to relevant work and interesting problems in the other area and provide a forum where they can interact.
Array-oriented programming unites two uncommon properties. As an abstraction, it directly mirrors high-level mathematical abstractions commonly used in fields ranging from the natural sciences and engineering to financial modeling. As a language feature, it exposes regular control flow, exhibits structured data dependencies, and lends itself to many types of program analysis. Furthermore, many modern computer architectures, particularly highly parallel ones such as GPUs and FPGAs, are well suited to executing array operations efficiently.
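A small example in Python with NumPy (one array-programming environment among many; not code from any ARRAY submission) illustrates how whole-array operations mirror the underlying mathematics and avoid explicit index loops:

```python
import numpy as np

# Array-oriented style: whole-array operations mirror the math directly
# and expose regular, analyzable data dependencies.
prices = np.array([10.0, 12.0, 11.0, 13.0, 14.0])

# Daily returns r[i] = (p[i+1] - p[i]) / p[i], written without an index loop.
returns = (prices[1:] - prices[:-1]) / prices[:-1]

# Normalization as a single broadcasted expression over the whole array.
normalized = (prices - prices.mean()) / prices.std()
```

Because each expression operates on entire arrays with structured dependencies, a compiler or library can map it to SIMD units, GPUs, or distributed backends without analyzing arbitrary loop-carried control flow.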
The ARRAY series of workshops explores:
formal semantics and design issues of array-oriented languages and libraries;
productivity and performance in compute-intensive application areas of array programming;
systematic notation for array programming, including axis- and index-based approaches;
intermediate languages, virtual machines, and program-transformation techniques for array programs;
representation of and automated reasoning about mathematical structure, such as static and dynamic sparsity, low-rank patterns, and hierarchies of these, with connections to applications such as graph processing, HPC, tensor computation and deep learning;
interfaces between array- and non-array code, including approaches for embedding array programs in general-purpose programming languages; and
efficient mapping of array programs, through compilers, libraries, and code generators, onto execution platforms, targeting multi-cores, SIMD devices, GPUs, distributed systems, and FPGA hardware, by fully automatic and user-assisted means.
Array programming is at home in many communities, including language design, library development, optimization, scientific computing, and across many existing language communities. ARRAY is intended as a forum where these communities can exchange ideas on the construction of computational tools for manipulating arrays.
This second edition of the Infer Practitioners Workshop gathers together developers and researchers working with the Infer static analysis platform. Infer enables anyone to write their own scalable inter-procedural static analysis for C, C++, Objective-C, and Java source code in only a few lines of code. Infer is deployed at several companies where it helps developers write better code. Inside Facebook, thousands of code changes are analysed every month by Infer, leading to thousands of bugs being found and fixed before they reach the codebase. Infer is also being used in academia, both as a research tool and a teaching medium.
The goal of the workshop is to share knowledge about how to use and modify Infer in industrial and academic contexts.
Due to recent algorithmic and computational advances, machine learning has seen a surge of interest in both research and practice. From natural language processing to self-driving cars, machine learning is creating new possibilities that are changing the way we live and interact with computers. However, the potential impact of these advances on programming languages remains mostly untapped, even though incredible research opportunities exist in combining machine learning and programming languages in novel ways.
This symposium seeks to bring together programming language and machine learning communities to encourage collaboration and exploration in the areas of mutual benefit. The symposium will include a combination of rigorous peer-reviewed papers and invited events. The symposium will seek papers on a diverse range of topics related to programming languages and machine learning including (and not limited to):
Application of machine learning to compilation and run-time scheduling
Collaborative human/computer programming (e.g., conversational programming)
Deterministic and stochastic program synthesis
Infrastructure and techniques for mining and analyzing large code bases
Interoperability between machine learning frameworks and existing code bases
Probabilistic and differentiable programming
Programming language and compiler support for machine learning applications
Programming language support and implementation of machine learning frameworks
The Second International Workshop on Programming Languages for Quantum Computing (PLanQC 2021) aims to bring together researchers from the fields of programming languages and quantum information, exposing the programming languages community to the unique challenges of programming quantum computers. It will promote the development of tools to assist in the process of programming quantum computers, both those that exist today and those likely to exist in the near to far future.
Workshop topics include (but are not limited to):
High-level quantum programming languages
Verification tools for quantum programs
Novel quantum programming abstractions
Quantum circuit and program optimization
Hardware-aware circuit compilation and routing
Error handling, mitigation, and correction
Instruction sets for quantum hardware
Other techniques from traditional programming languages (e.g., types, compilation/optimization, foreign function interfaces) applied to the domain of quantum computation.
The Programming Language Mentoring Workshop is designed to broaden the exposure of late-stage undergraduate students and early-stage graduate students to research and career opportunities in programming languages. The workshop program will include technical sessions that cover both the history and current practice of core subfields within programming languages, mentoring sessions that cover effective habits for navigating the research landscape, and social sessions that create opportunities for students to interact with researchers in the field.
Static and dynamic analysis techniques and tools for Java, and other programming languages, have received widespread attention for a long time. The application domains of these analyses range from core libraries to modern technologies such as web services and Android applications. Over time, various analysis frameworks have been developed to provide techniques for optimizing programs, ensuring code quality, and assessing security and compliance.
SOAP 2021 aims to bring together the members of the program analysis community to share new developments and shape new innovations in program analysis. For SOAP 2021, we invite contributions and inspirations from researchers and practitioners working with program analysis. We are particularly interested in exciting analysis framework ideas, innovative designs, and analysis techniques, including preliminary results of work in progress. We will also focus on the state of the practice for program analysis by encouraging submissions by industrial participants, including tool demonstration submissions. The workshop agenda will continue its tradition of lively discussions on extensions of existing frameworks, development of new analyses and tools, and how program analysis is used in real-world scenarios.
Emerging non-volatile memory (NVM) technologies provide fast access to persistent data (guaranteed to endure power failures and crashes) at a performance comparable to volatile memory (RAM). NVM (a.k.a. persistent memory) is expected to supplant RAM in the near future, leading to substantial changes in software and how it is engineered.
However, the performance gains of NVM are difficult to exploit correctly. A key challenge lies in ensuring correct recovery after a crash by maintaining the consistency of the data in persistent memory. This requires an understanding of the underlying (weak) persistency model, describing the order in which stores are propagated to NVM. The problem is that CPUs are not directly connected to memory; instead there are multiple non-persistent caches in between. Consequently, memory stores are not propagated to NVM at the time and in the order issued by the processor, but rather at a later time and in the order decided by cache coherence protocols.
In this tutorial, we demonstrate three facets of persistency research:
We present the formal persistency semantics of the ubiquitous Intel-x86 and ARMv8 CPU architectures.
We describe common programming patterns for implementing higher-level persistent libraries (e.g. transactions).
We present persistent linearisability as a correctness condition for verifying persistent algorithms.
OpenMP is an industry-standard API for writing portable shared-memory parallel programs in C/C++/Fortran. Almost every mainstream compiler of these languages now supports compilation of OpenMP programs. However, we are not aware of any compiler framework that was designed from the ground up with OpenMP semantics in mind. Consequently, not all components of such frameworks are generally applicable to (or conform with) OpenMP's parallel semantics. This half-day tutorial presents a new open-source source-to-source compiler framework called IMOP (IIT Madras OpenMP compiler), which addresses these limitations.
Each component in IMOP has been designed and implemented with OpenMP syntax and semantics in mind. IMOP comprises more than 154 kLOC of Java and takes OpenMP C programs as its input. With numerous unique features, such as OpenMP-aware compilation, automatic generation of parallel variants of serial data-flow passes, self-stabilization of program abstractions under program modifications, and integration with the Z3 SMT solver, IMOP can significantly simplify the task of writing tools for program analysis, profiling, and optimization. In this hands-on tutorial, we will teach the fundamentals and selected advanced concepts of IMOP, helping participants develop their research prototypes faster.
This tutorial on the Gigahorse smart-contract analysis framework covers:
Setting up the Gigahorse framework development environment and related toolchains
Specifying simple program analyses
Implementing analyses for known vulnerabilities such as reentrancy
Running these analyses at scale and comparing their results
Introducing basic analysis design considerations and their effects on precision, completeness, and scalability
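As a toy illustration of the kind of property such analyses check (this is not Gigahorse's actual interface, which expresses analyses in Souffle Datalog over a decompiled IR; it is a hypothetical Python sketch over made-up EVM-style opcode lists), a reentrancy-style check flags state updates that follow an external call:

```python
# Toy reentrancy-style check: flag any external CALL that is followed
# by a storage write (SSTORE) in straight-line code. Opcode names mimic
# the EVM; the function and its inputs are illustrative only.

def flags_reentrancy(instructions):
    """Return True if an external CALL precedes a state update (SSTORE)."""
    seen_call = False
    for op in instructions:
        if op == "CALL":
            seen_call = True
        elif op == "SSTORE" and seen_call:
            return True  # state updated after an external call: suspicious
    return False

# Vulnerable pattern: send funds first, update the balance afterwards.
vulnerable = ["SLOAD", "CALL", "SSTORE"]
# Checks-effects-interactions: update state before the external call.
safe = ["SLOAD", "SSTORE", "CALL"]

print(flags_reentrancy(vulnerable), flags_reentrancy(safe))
```

A real analysis must of course reason over control flow, data flow, and calls rather than a straight-line opcode list, which is where the precision, completeness, and scalability trade-offs discussed in the tutorial arise.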
Necessary background: the tutorial will make as few assumptions as possible regarding the background of participants, especially regarding blockchains and smart contracts. Necessary concepts of smart contract execution will be introduced in the tutorial, although the emphasis will be on static analysis. Participants should have some background in intermediate languages and simple program analysis, at the level of a compilers course.
Medium: There will be an initial presentation of tutorial material (slides + screen sharing for command line and setup). Afterwards, the tutorial is expected to be interactive, with extensive screen sharing among participants to jointly examine code.
Platform: Participants should have machines with a Unix-like OS (Linux preferred; macOS should be fine). Ideally, the Souffle language should be installed and tested before the tutorial.