# Real-Time Probabilistic Programming
**Authors**: Lars Hummelgren, Matthias Becker, and David Broman
> EECS and Digital Futures, KTH Royal Institute of Technology, Sweden
## Abstract
Complex cyber-physical systems interact in real time and must consider both timing and uncertainty. Developing software for such systems is expensive and difficult, especially when modeling, inference, and real-time behavior must be developed from scratch. Recently, a new kind of language has emerged, called probabilistic programming languages (PPLs), that simplifies modeling and inference by separating the concerns of probabilistic modeling and inference algorithm implementation. However, these languages have primarily been designed for offline problems, not online real-time systems. In this paper, we combine PPLs and real-time programming primitives by introducing the concept of real-time probabilistic programming languages (RTPPL). We develop an RTPPL called ProbTime and demonstrate its usability on an automotive testbed performing indoor positioning and braking. Moreover, we study fundamental properties and design alternatives for runtime behavior, including a new fairness-guided approach that automatically optimizes the accuracy of a ProbTime system under schedulability constraints.
## I Introduction
Probabilistic programming [1, 2] is an emerging programming paradigm that enables simple and expressive probabilistic modeling of complex systems. Specifically, the key motivation for probabilistic programming languages (PPLs) is to separate the concerns between the model (the probabilistic program) and the inference algorithm. Such separation enables the system designer to focus on the modeling problem without the need to have deep knowledge of Bayesian inference algorithms, such as Sequential Monte Carlo (SMC) [3, 4] or Markov chain Monte Carlo (MCMC) [5] methods.
There are many research PPLs, including Pyro [6], WebPPL [7], Stan [8], Anglican [9], Turing [10], Gen [11], and Miking CorePPL [12]. These PPLs focus on offline inference problems, such as data cleaning [13], phylogenetics [14], computer vision [15], or cognitive science [16], where time and timing properties are not explicitly part of the programming model. Likewise, many languages and environments exist for programming real-time systems [17, 18, 19] with no built-in support for probabilistic modeling and inference. Although some recent work exists on using probabilistic programming for reactive synchronous systems [20, 21] and for continuous time systems [22], no existing work combines probabilistic programming with real-time programming, where both inference and timing constructs are first-class.
Combining probabilistic programming with real-time programming results in several significant research challenges. Specifically, we identify two key challenges: (i) how to incorporate language constructs for both timing and probabilistic reasoning in a sound manner, and (ii) how to fairly distribute resources among systems of tasks while considering real-time constraints and inference accuracy requirements.
In this paper, we introduce a new kind of language that we call real-time probabilistic programming languages (RTPPLs). In this paradigm, users can focus on the modeling aspect of inferring unknown parameters of the application’s problem space without knowing details of how timing aspects or inference algorithms are implemented. We motivate the ideas behind this new kind of language and outline the main design challenges. To demonstrate the concepts of RTPPL, we develop an RTPPL called ProbTime, which includes probabilistic programming primitives, timing primitives, and primitives for designing modular task-based real-time systems. We create a compiler toolchain for ProbTime and demonstrate how it can be efficiently implemented in an automotive positioning and braking case study. A key aspect of our design is real-time inference using Sequential Monte Carlo (SMC) and the automatic configuration built on top of this. We have implemented the compiler within the Miking framework [23] as part of the Miking CorePPL effort [12].
```
2 let x = assume (Beta 2.0 2.0) in
3 observe true (Bernoulli x);
4 ...
5 x
```
(a) Line 2 defines the latent variable x and gives it the prior Beta(2,2). Line 3 shows an observation statement (observing a true value), and line 5 states that the posterior for x should be computed.
(b) Prior distribution Beta(2,2) for latent variable x.
(c) Posterior distribution for x, one true observation.
(d) Posterior distribution, observations [T, F, F, T, T].
Figure 1: A simple CorePPL program modeling a coin flip. Fig. 1(a) shows the source code, and Fig. 1(b) the prior distribution. Fig. 1(c) gives the posterior distribution after one coin flip, and Fig. 1(d) the posterior after a sequence of five coin flips.
In summary, we make the following contributions:
- We introduce the new paradigm of real-time probabilistic programming languages (RTPPL), motivate why it is essential, and outline significant design challenges (Sec. II).
- We design and implement ProbTime, a language within the domain of RTPPL (Sec. III), and specify its formal behavior (Sec. IV).
- We present a novel automated offline configuration approach that maximizes inference accuracy under timing constraints. Specifically, we introduce the concepts of execution-time and particle fairness within the context of real-time probabilistic programming (Sec. V) and study how they can be incorporated in an automated configuration setting (Sec. VI).
- We develop an automotive testbed including both hardware and software. As part of our case study, we demonstrate how a non-trivial ProbTime program can perform indoor positioning and braking in a real-time physical environment (Sec. VII).
## II Motivation and Challenges with RTPPL
This section introduces the main ideas of probabilistic programming, motivates why it is useful for real-time systems, and outlines some of the key challenges.
### II-A Probabilistic Programming Languages (PPLs)
Probabilistic programming is a rather recent programming paradigm that enables probabilistic modeling where the models can be described as Turing-complete programs. Specifically, given a probabilistic program $f$, observed data, and prior knowledge about the distribution of latent variables $\theta$, a PPL execution environment can automatically approximate the posterior distribution of $\theta$. Recall Bayes’ rule
$$
\displaystyle p(\theta\mid\textbf{data})=\frac{p(\textbf{data}\mid\theta)\,p(\theta)}{p(\textbf{data})} \tag{1}
$$
where $p(\theta\mid\textbf{data})$ is the posterior, $p(\textbf{data}\mid\theta)$ the likelihood, $p(\theta)$ the prior, and $p(\textbf{data})$ the normalizing constant. Using a simple toy probabilistic program, we explain how standard PPL constructs relate to Bayes’ rule.
Consider Fig. 1(a), written in CorePPL [12], the core language that our work is based on. We use this coin-flip example to give an intuition of the fundamental constructs in a PPL. The model starts by defining a random variable x (line 2) using the assume construct. In this example, the random variable x models the probability that a coin flip becomes true (we let true mean heads and false mean tails). For instance, if $p(\texttt{x})=0.5$, the coin is fair, whereas if, e.g., $p(\texttt{x})=0.7$, it is unfair and more likely to result in true when flipped.
The assume construct on line 2 defines the random variable and gives its prior; in this case, the Beta distribution with both parameters equal to $2$. If we sample from this prior, we get the Beta distribution depicted in Fig. 1(b). Note how the sample constructs in a probabilistic program (assume in CorePPL) correspond to the prior $p(\theta)$ in Bayes’ rule (Equation 1).
However, the goal of Bayesian inference is to estimate the posterior for a latent variable, given some observations. Line 3 in the program shows an observe statement, where a coin flip of value true is observed according to the Bernoulli distribution. Note how the Bernoulli distribution’s parameter depends on the random variable x. Consider Fig. 1(c), which depicts the inferred posterior distribution given one observed true coin flip. As expected, the distribution shifts slightly to the right, meaning that given one sample, the coin is estimated to be somewhat biased toward true. Note also how observe statements in a probabilistic program correspond to the likelihood $p(\textbf{data}\mid\theta)$ in Bayes’ rule. That is, the likelihood that we observe specific data, given a certain latent variable $\theta$.
If we extend Fig. 1(a) with in total five observations [true, false, false, true, true] in a sequence (illustrated with ... on line 4), the resulting posterior looks like Fig. 1(d). We can note the following: (i) the resulting mean is still slightly biased toward true (we have observed one more true value than false values), and (ii) the variance is lower (the peak is slightly thinner). The latter is a direct consequence of Bayesian inference and one of its key benefits: given the prior and the observed data, the posterior correctly represents the uncertainty of the result, not just a point estimate.
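Because the Beta prior is conjugate to the Bernoulli likelihood, the posterior in this toy example can also be computed in closed form, which is a convenient way to sanity-check inference results. The following minimal Python sketch applies the standard conjugate update (it is textbook Bayesian statistics, not part of CorePPL):

```python
# Beta-Bernoulli conjugacy: a Beta(a, b) prior updated with t true and
# f false observations yields the posterior Beta(a + t, b + f).
def beta_mean(a, b):
    return a / (a + b)

def beta_var(a, b):
    return (a * b) / ((a + b) ** 2 * (a + b + 1))

a, b = 2.0, 2.0                        # the prior Beta(2, 2) from Fig. 1(b)
obs = [True, False, False, True, True]
t, f = sum(obs), len(obs) - sum(obs)
post_a, post_b = a + t, b + f          # posterior Beta(5, 4), cf. Fig. 1(d)

print(beta_mean(post_a, post_b))       # 0.5555... -> slightly biased toward true
print(beta_var(post_a, post_b) < beta_var(a, b))  # True -> lower variance
```

The numbers match the qualitative observations above: the posterior mean 5/9 is slightly above 0.5, and the posterior variance is lower than the prior's.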
There are many inference strategies that can be used for the inference in the toy example, such as sequential Monte Carlo (SMC) or Markov chain Monte Carlo (MCMC). In this work, we focus on SMC and one of its instances, also known as the particle filter [24] algorithm, because of its strength in this kind of application. The intuition behind particle filters is that inference is performed by executing the probabilistic program (the model) multiple times (potentially in parallel). Each such execution is called a particle; when execution finishes, a particle consists of a weight (how likely the particle is) and its output values, and the set of all particles approximates the posterior distribution. Intuitively, the more particles, the better the accuracy of the approximate inference. Many more details of the algorithm (e.g., resampling strategies) are outside this paper’s scope; the interested reader is referred to the technical overview by Naesseth et al. [25]. Although hiding the details of the inference algorithm is one of the key ideas of PPLs (separating the model from inference), the number of particles is essential for automatic inference and scheduling. Hence, it is a key concept in our work and is discussed further throughout this paper.
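To make the particle intuition concrete, the following Python sketch runs a single-step importance sampler, a simplified special case of SMC without resampling, on the coin-flip model. The function names and structure are our own illustration, not CorePPL's API:

```python
import random

def coin_model(obs):
    """One particle: execute the model once, returning (weight, sampled value)."""
    x = random.betavariate(2.0, 2.0)        # assume x ~ Beta(2, 2)
    w = 1.0
    for o in obs:                           # observe o ~ Bernoulli(x)
        w *= x if o else 1.0 - x
    return w, x

def infer(model, obs, n_particles):
    """Run the model n_particles times; the weighted samples form the posterior."""
    particles = [model(obs) for _ in range(n_particles)]
    total = sum(w for w, _ in particles)
    return [(w / total, x) for w, x in particles]

random.seed(0)
posterior = infer(coin_model, [True, False, False, True, True], 10_000)
mean = sum(w * x for w, x in posterior)
print(round(mean, 2))  # close to the analytic posterior mean 5/9 ~ 0.56
```

Increasing `n_particles` tightens the approximation, which is the trade-off between accuracy and execution time that the later sections exploit.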
### II-B Real-Time Probabilistic Programming (RTPPL)
```
46 system {
47   sensor speed1 : Float rate 50 ms
48   sensor speed2 : Float rate 100 ms
49   actuator brake : Float rate 500 ms
50   task speedEst1 = Speed (250 ms) importance 1
51   task speedEst2 = Speed (500 ms) importance 1
52   task pos = Position () importance 3
53   task braking = Brake (300 ms) importance 4
54   speed1 -> speedEst1.in1
55   speed2 -> speedEst2.in1
56   speedEst1.out -> pos.in1
57   speedEst1.out -> braking.in1
58   speedEst2.out -> pos.in2
59   speedEst2.out -> braking.in2
60   pos.out -> braking.in3
61   braking.out -> brake }
```
(a) The system declaration of a ProbTime program.
(b) A graphical representation of the system defined in Fig. 2(a). The system consists of two sensors, speed1 and speed2 on the left-hand side, four tasks speedEst1, speedEst2, pos, and braking in the middle, and an actuator brake on the right-hand side. Lines represent connections, where the white box represents the output port and the incoming arrow represents the input port.
Figure 2: Example system of a ProbTime program in textual (2(a)) and graphical form (2(b)).
The toy coin-flip example in the previous section gives an intuition of probabilistic programming but does not show its power in large-scale applications. Probabilistic programming is used in several domains today, but surprisingly little has been done in the context of time-critical applications. Although a rich literature exists on using Bayesian inference and filtering algorithms (e.g., particle filters in aircraft positioning [24]), these algorithms are typically hand-coded, with the probabilistic model and the inference algorithm intertwined. This means that a developer needs deep insights into three different fields: (i) probabilistic modeling, (ii) Bayesian inference algorithms, and (iii) real-time programming aspects. Standard probabilistic programming is motivated by the appealing argument of separating (i) and (ii), thus enabling probabilistic modeling by developers without deep knowledge of how to implement inference efficiently and correctly. However, so far, probabilistic programming has not been combined with (iii) real-time aspects, such as timing, scheduling, and worst-case execution time estimation. This paper aims to make a first step towards closing this gap. We call this approach real-time probabilistic programming (RTPPL).
How can an RTPPL be designed? For instance, we can extend an existing PPL with timing semantics, extend an existing real-time language with PPL constructs, or create a new domain-specific language (DSL) including only a minimal number of constructs for reasoning about timing and probabilistic inference. In this research work, we choose the latter to avoid complexity when extending large existing languages. In particular, our purpose is to create this rather small research language to study the fundamentals of this new kind of language category. We identify and examine the following key challenges with RTPPLs:
- Challenge A: In an RTPPL, how can we encode real-time properties (deadlines, periodicity, etc.) and probabilistic constructs (sampling, observations, etc.) without forcing the user to specify low-level details such as particle counts and worst-case execution times?
- Challenge B: How can a runtime environment be constructed such that a fair amount of time is spent on the different tasks, giving the right number of particles to each task (for accuracy), while the system is still known to be statically schedulable?
We first address Challenge A (Sec. III - IV) by designing a new minimalistic research DSL called ProbTime and defining its formal behavior. Using the new DSL, we address Challenge B by studying two new concepts: execution-time fairness and particle fairness (Sec. V), and how particle counts, execution-time measurements, and scheduling can be performed automatically to get a fair and working configuration (Sec. VI).
## III ProbTime - an RTPPL
In this section, we introduce a new real-time probabilistic programming language called ProbTime. We present the timing and probabilistic constructs in the language using a small example, followed by an overview of the compiler implementation. ProbTime is open-source and publicly available at https://github.com/miking-lang/ProbTime.
### III-A System Declaration
We present a ProbTime system declaration implementing automated braking for a train in Fig. 2. The system keeps track of the train’s position and uses the automatic brakes before reaching the end of the track. The train provides observation data via two speed sensors with varying frequencies and accuracy (left side of Fig. 2(b)), and the system can activate the brakes via an actuator (right side of Fig. 2(b)). In Fig. 2(a), we first declare the sensors and actuators and associate them with a type to indicate what kind of data they operate on (lines 47-49). We also specify the rate at which data is provided by sensors and consumed by actuators. The squares in the graphical representation correspond to the tasks of our system. We instantiate the tasks on lines 50-53. Note that speedEst1 and speedEst2 are instantiated from the same template.
The importance construct captures the relative importance of tasks. We use the importance value to indicate how important the quality of inference performed by a task is. It is not to be confused with the task priority. As we saw in the coin flip example (Sec. II), the number of particles used in inference relates to the quality of the estimate. In the train example, we use importance on lines 50-53 to indicate that pos and braking should run more particles than speedEst1 and speedEst2. We discuss importance in more detail in Sec. V.
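As a purely illustrative reading of importance (the actual configuration algorithm is the topic of Sec. V-VI, not this sketch), one could imagine splitting a total particle budget proportionally to the declared importance values:

```python
def allocate(budget, importance):
    """Split a total particle budget proportionally to task importance.
    Illustrative only; ProbTime's actual configuration approach is
    described in Sec. V-VI of the paper."""
    total = sum(importance.values())
    return {task: budget * imp // total for task, imp in importance.items()}

# Importance values from the train example in Fig. 2(a).
tasks = {"speedEst1": 1, "speedEst2": 1, "pos": 3, "braking": 4}
print(allocate(9000, tasks))
# {'speedEst1': 1000, 'speedEst2': 1000, 'pos': 3000, 'braking': 4000}
```

Under this reading, braking receives four times as many particles as each speed estimator, matching the intent that its inference quality matters most.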
On lines 54-61 in Fig. 2(a), we declare the connections using arrow syntax a -> b, specifying that data written to output port a is delivered to input port b. Names of sensors and actuators represent ports, and x.y refers to port y of task x.
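The connection declarations induce a dataflow graph. As a small sketch (our own representation, not ProbTime's runtime), the connections of Fig. 2(a) can be modeled as a lookup table from output ports to their fan-out:

```python
# Connections from Fig. 2(a), as (output port, input port) pairs.
connections = [
    ("speed1", "speedEst1.in1"), ("speed2", "speedEst2.in1"),
    ("speedEst1.out", "pos.in1"), ("speedEst1.out", "braking.in1"),
    ("speedEst2.out", "pos.in2"), ("speedEst2.out", "braking.in2"),
    ("pos.out", "braking.in3"), ("braking.out", "brake"),
]

def destinations(port):
    """All input ports that receive messages written to output port `port`."""
    return [dst for src, dst in connections if src == port]

print(destinations("speedEst1.out"))  # ['pos.in1', 'braking.in1']
```

Note that a single output port (e.g., speedEst1.out) may fan out to several input ports, while each sensor feeds exactly one task here.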
### III-B Templates and Timing Constructs
Continuing with the train braking example, we outline the definitions of the task templates Speed and Position in Listing 1. Consider the definition of the Speed template (lines 20-26). We declare the input port in1 and the output port out on lines 21-22, and annotate them with types.
We control timing in ProbTime using the iterative periodic block statement. A periodic block is a loop where each iteration starts after a statically known delay. Each iteration constitutes a task instance. On lines 23-26, we define a periodic block whose period is determined by the period argument of the Speed template. For instance, if period is $1\text{\,}\mathrm{s}$, the release time is one second after the release time of the previous iteration. Our approach to timing is similar to the logical execution time paradigm [26]. However, while all timestamps are absolute under the hood, we only expose a relative view of the logical time of the current task instance.
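The release-time semantics of a periodic block can be sketched as a loop over absolute release times. This hypothetical Python runtime (not ProbTime's actual implementation) shows why each release is computed relative to the previous release rather than to when the body happens to finish, which would accumulate drift:

```python
import time

def periodic(period_s, body, iterations):
    """Run `body` with release times spaced exactly `period_s` apart.
    Sleeping until an absolute release time avoids accumulating drift,
    unlike sleeping for a fixed duration after each iteration."""
    release = time.monotonic()
    for _ in range(iterations):
        body()
        release += period_s                           # next logical release
        time.sleep(max(0.0, release - time.monotonic()))

ticks = []
periodic(0.01, lambda: ticks.append(time.monotonic()), 5)
print(len(ticks))  # 5
```

Internally the release times are absolute, but a task body only ever sees the relative logical time of its own instance, mirroring the relative view ProbTime exposes.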
The read statement retrieves a sequence of inputs available from an input port at the logical time of the task instance. On line 24, we retrieve inputs from port in1 and store them in the variable obs. Similarly, the write statement (line 26) sends a message with a given payload to the specified output port.
In Position (lines 27-36), we use our prior belief of the train’s position when estimating its current position. However, variables in ProbTime are immutable by default. We use update d in the periodic block on line 32 to indicate that updates to d (the position distribution) should be visible in subsequent iterations. Also, in the write on line 36, we specify an offset to the timestamp of the message relative to the current logical time of the task (it is zero by default).
Listing 1: Definition of the Speed and Position task templates.
```
20 template Speed (period : Int) {
21   input in1 : Float
22   output out : Dist (Float)
23   periodic period {
24     read in1 to obs
25     infer speedModel (obs) to d
26     write d to out } }
27 template Position () {
28   input in1 : Dist (Float)
29   input in2 : Dist (Float)
30   output out : Dist (Float)
31   infer initPos () to d
32   periodic 1 s update d {
33     read in1 to speed1
34     read in2 to speed2
35     infer positionModel (d, speed1, speed2) to d
36     write d to out offset 500 ms } }
```
### III-C Models and Probabilistic Constructs
We define three probabilistic constructs in ProbTime, similar to what is used in other PPLs: sample, observe, and infer. We use sample and observe when defining probabilistic models, and we use the infer construct to produce a distribution from a probabilistic model in task templates.
Listing 2: Probabilistic model for estimating the train’s speed.
```
10 model speedModel (obs : [TSV (Float)]) : Float {
11   sample b ~ Uniform (0.0, maxSpeed)
12   sample m ~ Gaussian (0.0, 1.0)
13   sample sigma ~ Gamma (1.0, 1.0)
14   for o in obs {
15     var ts = timestamp (o)
16     var x = intToFloat (ts) / intToFloat (1 s)
17     var y = m * x + b
18     observe value (o) ~ Gaussian (y, sigma) }
19   return b }
```
Consider the probabilistic model defined using the function speedModel in Listing 2. The function models the speed of the train at the current logical time using Bayesian linear regression. The parameter obs is a sequence of speed observations encoded as a sequence of floating-point messages (we use TSV as a shorthand for timestamped values).
In this model, we use sample statements to sample a random slope and intercept of a line, assuming acceleration is constant. There are three latent variables (defined on lines 11-13): b (the offset of the line), m (the slope of the line), and sigma (the variance, approximating the uncertainty of the estimate). The estimated line can be seen as a function of the speed over (relative) time, where time $0$ is the logical time of the task instance.
In the for-loop on lines 14-18, we update our belief based on the speed observations. First, we translate the relative timestamp of the observation o to a floating-point number, representing our x-value (lines 15-16). We use the observe statement on line 18 to update our belief. The observed value is the data of o (retrieved using value(o)), and we expect observations to be distributed according to a Gaussian distribution centered around the y-value. We return the train’s estimated speed (the intercept of the line) on line 19. Figure 3 presents a sample plot of what this looks like when the train accelerates. As we use relative timestamps in ProbTime, we can define this rather complicated model succinctly.
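To illustrate what infer does with a model of this shape, the following Python sketch runs importance sampling on a speedModel-style regression. The observation data (points on a line with slope 1 and intercept 0.5), the maxSpeed value, and all helper names are our own assumptions for illustration, not from the paper:

```python
import math
import random

def speed_model(obs, max_speed=1.0):
    """One particle for a Bayesian linear-regression speed model in the
    style of Listing 2: sample b, m, sigma, weight by the Gaussian
    likelihood of the timestamped observations, and return (weight, b)."""
    b = random.uniform(0.0, max_speed)      # sample b ~ Uniform(0, maxSpeed)
    m = random.gauss(0.0, 1.0)              # sample m ~ Gaussian(0, 1)
    sigma = random.gammavariate(1.0, 1.0)   # sample sigma ~ Gamma(1, 1)
    w = 1.0
    for ts, v in obs:                       # ts: relative time in seconds (<= 0)
        y = m * ts + b
        w *= math.exp(-((v - y) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))
    return w, b

# Hypothetical observations lying on a line with slope 1 and intercept 0.5.
random.seed(1)
obs = [(-0.5, 0.0), (-0.4, 0.1), (-0.3, 0.2), (-0.2, 0.3), (-0.1, 0.4)]
particles = [speed_model(obs) for _ in range(20_000)]
total = sum(w for w, _ in particles)
est = sum(w * b for w, b in particles) / total
print(est)  # posterior mean of b, the speed at time 0; close to 0.5 here
```

The returned intercept b is the estimated speed at the current logical time, exactly the quantity Listing 2 returns on line 19.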
Figure 3: Outcome of the speed model for the speedEst2 task on synthetic data. The x-axis represents time in seconds, and the y-axis represents the train’s speed in meters per second (we seek the speed at the current time, $x=0.0$ ). The red dots represent speed observations, and the blue lines are estimates from the model (slope and intercept), where the opacity of a line indicates its likelihood.
Figure 4: An overview of the ProbTime compiler. Gray square boxes represent artifacts, and rounded blue squares represent compiler transformations (numbered by the order in which they run). We use the pre-existing CorePPL compiler to produce executable code in the dashed area.
Finally, recall Listing 1, where we use the infer statement to infer the posterior using the probabilistic model (in this case speedModel).
### III-D Compiler Implementation
Fig. 4 depicts the key steps of the ProbTime compiler. The compiler is implemented within the Miking framework [23] because of its good support for defining domain-specific languages and the possibility of extending the Miking CorePPL [12] framework with real-time constructs.
A key step of the compilation is the extractor (3 in Fig. 4), which produces one abstract syntax tree (AST) for each task declared in the ProbTime AST. We combine this AST with a runtime AST, which, for instance, implements infer and delay. We individually pass the task ASTs to the backend compiler (4), translating them to CorePPL ASTs, which are compiled into executables using the existing CorePPL compiler (5). Our implementation consists of 4400 lines of code.
## IV System Model and Formal Specification
This section presents a formal specification of ProbTime on a system level (Sec. IV-A), including a system model. We also consider ProbTime on a task level (Sec. IV-B), focusing on communication and the probabilistic constructs.
### IV-A System-Level Specification
We define a ProbTime system as a tuple $(\mathcal{T},\mathcal{S},\mathcal{A},\mathcal{C})$ , where $\mathcal{T}$ is a set of tasks, $\mathcal{S}$ is a set of sensors, $\mathcal{A}$ is a set of actuators, and $\mathcal{C}$ is a set of connections. A task $\tau$ is represented by the tuple $(C,T,\pi,n,v)$ , where $C$ is its worst-case execution time (WCET), $T$ its period, $\pi$ its static priority, $n$ the number of particles, and $v$ its assigned importance value. Tasks are subject to an implicit deadline $D=T$ . We assume the periodic block of each task $\tau$ contains at most one use of infer, for which we set the number of particles $n_{\tau}$ .
We assume the tasks of $\mathcal{T}$ are ordered by increasing period. The tasks execute on an exclusive subset of the cores available on a platform. Specifically, if the platform has $m$ cores, we schedule our tasks on a subset $\mathcal{U}$ of $k$ cores, where $k<m$ (we reserve one core for the platform). Each task $\tau$ is statically assigned a core $c\in\mathcal{U}$ . We use partitioned fixed-priority scheduling on the cores $\mathcal{U}$ . Priorities are assigned according to the rate-monotonic principle [27], where the task $\tau$ with the shortest period $T_{\tau}$ has the highest priority $\pi_{\tau}$ . The tasks are preemptive.
Each task $\tau$ has a set of input ports $I_{\tau}$ from which it can receive data and a set of output ports $O_{\tau}$ to which it can send data. Sensors act as output ports, and actuators as input ports. A ProbTime system has a set of input ports $I=\mathcal{A}\cup\bigcup_{\tau\in\mathcal{T}}I_{\tau}$ and a set of output ports $O=\mathcal{S}\cup\bigcup_{\tau\in\mathcal{T}}O_{\tau}$ . The set $\mathcal{C}$ contains connections $c_{i}=(o_{j},i_{k}),o_{j}\in O\land i_{k}\in I$ . We say that a task $\tau$ is a predecessor of $\tau^{\prime}$ if $\exists(o,i)\in\mathcal{C}.o\in O_{\tau}\land i\in I_{\tau^{\prime}}$ .
Every port $p\in I\cup O$ of a task has an associated buffer $B_{p}$ storing incoming or outgoing messages. Given the particle counts $n_{\tau}$ of all tasks $\tau$ , we can determine the message size. The compiler counts the number of times a task writes to each port when this can be determined at compile-time (otherwise, the compiler produces a warning). Based on these counts and the declared rate of sensors and actuators, the automatic configuration (Sec. VI) verifies that a given (fixed) buffer size is sufficient for the messages passing through all ports.
### IV-B Task-Level Specification
We present the behavior of a task $\tau$ in Algorithm 1. The RunTaskInstance function (lines 1 - 6) is called at the start of the periodic block. It starts with an absolute delay until the next period, increasing the logical time [28] of the task by $T_{\tau}$ (line 3). The task $\tau$ reads all data from input ports $i\in I_{\tau}$ into buffers $B_{i}$ (using ReadNonBlocking on line 4), runs an iteration of the periodic block of the task (such as the body of the periodic block in Listing 1) in RunIteration (line 5), and writes messages stored in output buffers $B_{o}$ to their output ports $o\in O_{\tau}$ (using WriteNonBlocking on line 6).
Reads and writes in a ProbTime task operate on the buffers rather than directly on the ports. The statement read i to x results in $\texttt{x}\leftarrow\textsc{ReadInputPort}(\texttt{i})$ . Similarly, write v to o offset d results in $\textsc{WriteOutputPort}(\texttt{v},\texttt{o},\texttt{d})$ , which appends a message with payload v and a timestamp equal to the current logical time of the task ( $t_{0}$ ) plus an offset (d) to the buffer $B_{o}$ . The reads from and writes to the ports are non-blocking, so we have no precedence relationships between tasks, and communication has no impact on scheduling.
Algorithm 1 Definition of the behavior of the task runtime.
1: function RunTaskInstance τ ()
2: while true do
3: $\textsc{DelayUntil}(T_{\tau})$
4: for $i\in I_{\tau}$ do $B_{i}\leftarrow\textsc{ReadNonBlocking}(i)$
5: $\textsc{RunIteration}(\tau)$
6: for $o\in O_{\tau}$ do $\textsc{WriteNonBlocking}(o,B_{o})$
7: function ReadInputPort ( $i$ )
8: return $B_{i}$
9: function WriteOutputPort ( $v,o,d$ )
10: $B_{o}\leftarrow B_{o},(v,t_{0}+d)$
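To illustrate Algorithm 1 outside of pseudocode, the following Python sketch mimics RunTaskInstance with explicit logical time and port buffers. The class and its port plumbing are illustrative assumptions, not the actual ProbTime runtime API.

```python
class TaskRuntime:
    """Minimal sketch of Algorithm 1: each instance advances the task's
    logical time by one period, reads input buffers, runs an iteration,
    and flushes output buffers."""
    def __init__(self, period, inputs, outputs, iteration):
        self.period = period
        self.inputs, self.outputs = inputs, outputs
        self.iteration = iteration                 # the RunIteration body
        self.logical_time = 0
        self.in_bufs = {i: [] for i in inputs}
        self.out_bufs = {o: [] for o in outputs}

    def read(self, port):
        # read i to x: returns the buffered messages, not the live port
        return self.in_bufs[port]

    def write(self, value, port, offset=0):
        # write v to o offset d: timestamp is logical time t0 plus d
        self.out_bufs[port].append((value, self.logical_time + offset))

    def run_instance(self, ports):
        self.logical_time += self.period           # DelayUntil(T)
        for i in self.inputs:                      # ReadNonBlocking
            self.in_bufs[i] = ports.pop(i, [])
        self.iteration(self)                       # RunIteration
        for o in self.outputs:                     # WriteNonBlocking
            ports.setdefault(o, []).extend(self.out_bufs[o])
            self.out_bufs[o] = []

# A task with period 100 that writes its logical time each instance.
task = TaskRuntime(100, [], ["out"], lambda t: t.write(t.logical_time, "out"))
ports = {}
for _ in range(3):
    task.run_instance(ports)
```

Regardless of when each iteration physically executes, the emitted messages carry logical timestamps 100, 200, and 300, reflecting the absolute-delay semantics of DelayUntil.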
Algorithm 2 Algorithmic definition of the formal behavior of probabilistic constructs within a task.
1: function Infer τ ( $m,x_{1},\ldots,x_{n}$ )
2: for $i\in 1\ldots n_{\tau}$ do
3: $v_{i},w_{i}\leftarrow\textsc{RunParticle}(m,x_{1},\ldots,x_{n})$
4: return $(v_{1},w_{1}),\ldots,(v_{n_{\tau}},w_{n_{\tau}})$
5: function RunParticle ( $m,x_{1},\ldots,x_{n}$ )
6: $w\leftarrow 0$
7: $v\leftarrow m_{w}(x_{1},\ldots,x_{n})$
8: return $v,w$
9: function UpdateWeight ( $w,o,d$ )
10: return $w+\textsc{LogObserve}(o,d)$
11: function LogObserve ( $o,d$ )
12: if $\textsc{Elementary}(d)$ then return $\text{logPdf}_{d}(o)$
13: else error
14: function Sample ( $d$ )
15: if $\textsc{Elementary}(d)$ then return $\text{sample}_{d}()$
16: else
17: $r\leftarrow\textsc{SampleUniform}(0,c_{|d|})$
18: $j\leftarrow\textsc{LowerBound}(r,c)$
19: return $v_{j}$
We present the formal behavior of the probabilistic constructs infer, observe, and sample in Algorithm 2. Consider the definition of inference for a task $\tau$ in the Infer τ function (lines 1 - 4). We perform inference by running a fixed number of particles $n_{\tau}$ , as determined by the automatic configuration (Sec. VI). Each particle $(v_{j},w_{j})$ consists of a value $v_{j}$ returned by the probabilistic model function $m$ and a weight $w_{j}$ determined by use of observe within the model. The sequence of particles returned by Infer τ (line 4) is the resulting distribution. The statement infer m (x, y) to d corresponds to $\texttt{d}\leftarrow$ Infer ${}_{\tau}(\texttt{m},\texttt{x},\texttt{y})$ in our pseudocode.
When we run a particle in the RunParticle function (lines 5 - 8), we use a weight $w$ , representing the accumulated log-likelihood of a particle having a particular value $v$ . Every statement observe o $\sim$ d modifies the weight as $w\leftarrow\textsc{UpdateWeight}(w,\texttt{o},\texttt{d})$ . We denote the model function as $m_{w}$ on line 7 to indicate that the model $m$ mutates the weight $w$ . For instance, if the model $m_{w}$ is speedModel of Listing 2, we run the body of the model and update $w$ for each use of observe (line 18 of Listing 2). The LogObserve function (lines 11 - 13 in Algorithm 2) represents the computation of the logarithmic weight for an observe statement. ProbTime supports observations on elementary distributions (e.g., Gaussian) but not on empirical distributions (produced by an infer). For an elementary distribution, we compute the weight using its logarithmic probability density function (line 12).
When we sample a random value as in sample x $\sim$ d, we assign a random value to the variable x from the distribution d. We define this in the Sample function (lines 14 - 19). For an elementary distribution, we use a known sampling function (line 15). For empirical distributions, we use the cumulative weights $c_{j}=\sum_{i=1}^{j}w_{i}$ to pick a random particle through a binary search (lines 17 - 19). Therefore, the time complexity of sampling is $O(\log n)$ for an empirical distribution of $n$ particles. As ProbTime supports sending empirical distributions between tasks, this complexity results in a dependency. A task $\tau_{i}$ that samples from a distribution sent by a predecessor $\tau_{j}$ has an increased execution time if we increase $n_{\tau_{j}}$ .
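A minimal Python sketch of the empirical-distribution branch of Sample (lines 17 - 19 of Algorithm 2), using cumulative weights and a binary search; the helper names are ours, and the weights are assumed non-negative:

```python
import bisect
import random

def sample_empirical(values, weights, rng):
    """Pick a particle proportionally to its weight: build the cumulative
    weights c_j, draw r uniformly in [0, c_n], and locate the first j
    with c_j >= r by binary search (O(log n))."""
    c = []
    total = 0.0
    for w in weights:
        total += w
        c.append(total)                       # c_j = w_1 + ... + w_j
    r = rng.uniform(0.0, c[-1])               # SampleUniform(0, c_|d|)
    return values[bisect.bisect_left(c, r)]   # LowerBound(r, c)

# Sampling many times should roughly reproduce the weights.
rng = random.Random(1)
values, weights = [0, 1, 2], [0.2, 0.3, 0.5]
counts = [0, 0, 0]
for _ in range(10_000):
    counts[sample_empirical(values, weights, rng)] += 1
```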
## V Fairness
In this section, we present new perspectives on fairness that emerge when combining real-time and probabilistic inference.
### V-A Execution-Time and Particle Fairness
We know that more particles improve the accuracy of the inference, but it is not obvious how the result of one task $\tau$ impacts the whole system’s behavior. As we concluded in Sec. IV, increasing the execution time or particle count of a task may affect the performance of other tasks. ProbTime allows the user to specify how important each task is. Specifically, we use the importance values $v_{\tau}$ associated with each task $\tau$ as a measure to guide fairness. But what is the right approach to fairness? We define and study fairness in two ways: (i) execution-time fairness, where we allocate execution time budgets $B_{\tau}$ (which the WCETs $C_{\tau}$ of each task $\tau$ must not exceed) to tasks proportional to their importance values, and (ii) particle fairness, where fairness between tasks is given in terms of the number of particles used in the inference.
Consider Fig. 5, where we present three scenarios to exemplify the differences between execution-time and particle fairness. Assume we have two tasks $\tau_{A}$ and $\tau_{B}$ such that $T_{\tau_{A}}=T_{\tau_{B}}$ and $v_{\tau_{A}}=2\cdot v_{\tau_{B}}$ , running on one core. Initially, the tasks are independent and have the same execution time per particle; scenarios 2 and 3 relax these assumptions. We present the outcome of the three scenarios (labeled 1, 2, and 3, respectively) for $\tau_{A}$ and $\tau_{B}$ concerning their relative ratios of execution time and particle count.
Figure 5: Illustration of execution time and particle counts for tasks $\tau_{A}$ and $\tau_{B}$ ( $\tau_{A}$ is twice as important as $\tau_{B}$ and both have the same period) for three scenarios when considering execution-time fairness (E) and particle fairness (P).
In scenario 1, both notions of fairness coincide: because the tasks have the same execution time per particle, they end up with the same ratios of execution time and particle count.
In scenario 2, we assume task $\tau_{B}$ requires four times as much execution time as $\tau_{A}$ per particle. If we use execution-time fairness, the execution time ratio of the tasks remains the same, but $\tau_{B}$ only has time to produce $250$ particles. For particle fairness, the slowdown of $\tau_{B}$ results in both tasks running fewer particles to maintain a fair allocation of particle counts, skewing the execution time in favor of $\tau_{B}$ .
In scenario 3, we instead assume $\tau_{A}$ samples from a distribution sent from $\tau_{B}$ , meaning the execution time of $\tau_{A}$ depends on the particle count of $\tau_{B}$ . The exact outcome depends on the ratio of time $\tau_{A}$ spends on sampling. Using execution-time fairness, we find that $\tau_{A}$ only has time to produce $500$ particles. Task $\tau_{B}$ produces the same number of particles as in scenario 1 because it is independent of $\tau_{A}$ . In contrast, when using particle fairness, $\tau_{A}$ produces more particles, and $\tau_{B}$ produces fewer.
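The particle counts of scenario 2 can be reproduced with a small calculation. The sketch below assumes one core with a normalized execution-time budget of 1 per period, per-particle costs chosen so that $\tau_{A}$ reaches 2000 particles under execution-time fairness, and $\tau_{B}$ costing four times as much per particle; the function names are ours.

```python
def execution_time_fairness(importances, cost, budget):
    """Split a shared execution-time budget across tasks proportionally
    to importance, then fill each share with as many particles as fit."""
    total_v = sum(importances.values())
    return {t: round((v / total_v) * budget / cost[t])
            for t, v in importances.items()}

def particle_fairness(importances, cost, budget):
    """Keep particle counts proportional to importance (n_t = k * v_t)
    and solve for the largest k whose total execution time fits."""
    k = budget / sum(v * cost[t] for t, v in importances.items())
    return {t: round(k * v) for t, v in importances.items()}

# Scenario 2 of Fig. 5: tau_A is twice as important, tau_B is four
# times as expensive per particle.
imp = {"A": 2, "B": 1}
cost = {"A": 1 / 3000, "B": 4 / 3000}   # per-particle execution time
etf = execution_time_fairness(imp, cost, budget=1.0)
pf = particle_fairness(imp, cost, budget=1.0)
```

Execution-time fairness yields 2000 and 250 particles, while particle fairness yields 1000 and 500, matching the counts of scenario 2 in Fig. 5.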
Which kind of fairness is preferred in a real-time probabilistic inference setting? Although execution-time fairness may seem natural from a real-time perspective, we will see (surprisingly) in the evaluation (Sec. VII-D) that particle fairness works better in practice.
Fairness is applicable in scenarios where the output quality is proportional to the time spent producing it. This paper focuses explicitly on SMC inference, where the number of particles controls the quality. However, we foresee no issues extending this to, e.g., MCMC inference algorithms by considering the number of runs instead of the particle count.
### V-B Computation Maximization
So far, we only considered the case where tasks run on one core. What if we want to apply fairness to tasks running on multiple cores? We consider two extremes: prioritizing fairness or prioritizing utilization. If we prioritize fairness, we allocate a fixed amount of resources (execution time or particles) to a task instance proportional to its importance value, regardless of which core the task is mapped to. We refer to this as fair utilization. If we prioritize utilization, we maximize the utilization on each core individually, ignoring the importance of tasks running on other cores. We refer to this as maximum utilization. In both cases, the resulting system is schedulable.
Assume we allocate the tasks listed on the left-hand side of Fig. 6 on a dual-core system (where $T_{i}$ is the period, $v_{i}$ is the importance, and $c$ is the core of task $\tau_{i}$ ). On the right-hand side, we present the result of applying the two alternatives using execution-time fairness (this also applies to particle fairness). Each box corresponds to a task instance ( $\tau_{D}$ has four times the frequency of $\tau_{A}$ ; hence, it has four boxes), and we consider importance values to apply per task instance. We see that $\tau_{A}$ and $\tau_{B}$ are allocated the same execution time, as $\tau_{A}$ is twice as important as $\tau_{B}$ , but $\tau_{B}$ runs twice as frequently.
The main takeaway from Fig. 6 is that, when prioritizing fair utilization, all cores are not always fully utilized. Recall that the tasks of the system may depend on each other. Increasing the execution times of $\tau_{C}$ and $\tau_{D}$ may negatively impact the performance of $\tau_{A}$ and $\tau_{B}$ . This is unfair, considering that $\tau_{A}$ and $\tau_{B}$ have higher importance. On the other hand, if the tasks are independent (which they are in this example), we are wasting computational resources. As an alternative, we also propose the concept of fair utilization maximization, where we start from fair utilization and gradually increase the resources (execution time or particles) allocated to tasks on cores that are not fully utilized as long as the system remains schedulable. If $\tau_{A}$ and $\tau_{B}$ do not depend on $\tau_{C}$ and $\tau_{D}$ , we get the same result as when prioritizing maximum utilization.
| Task | $T_{i}$ | $v_{i}$ | $c$ |
| --- | --- | --- | --- |
| $A$ | $1\,\mathrm{s}$ | 8 | 1 |
| $B$ | $0.5\,\mathrm{s}$ | 4 | 1 |
| $C$ | $1\,\mathrm{s}$ | 4 | 2 |
| $D$ | $0.25\,\mathrm{s}$ | 2 | 2 |
Figure 6: The resulting utilization of the tasks (left) allocated on two cores when prioritizing fairness (top right) or maximizing the utilization (bottom right) in terms of execution time.
## VI Automatic Configuration
In this section, we present two approaches to automatic configuration that we apply to a compiled ProbTime system to maximize inference accuracy while preserving schedulability. Automatic configuration is performed before running the system so it does not introduce runtime overheads.
### VI-A Configuration Overview
We define two key phases needed to configure a ProbTime system: (i) the collection phase, where we record sensor inputs, and (ii) the configuration phase, where we determine the particle counts of all tasks. During phase (i), we collect input data from the sensors of the actual system, which we replay repeatedly during (ii) in a hardware-in-the-loop simulation. The configuration phase runs on the target system. This paper focuses on the configuration phase (ii).
We present two alternative approaches to automatic configuration based on execution-time fairness (Sec. VI-B) and particle fairness (Sec. VI-C). In both cases, we seek a particle count $n_{\tau}$ to use in the inference of each task $\tau$ based on its importance value $v_{\tau}$ , using fair utilization. For simplicity, we assume the user provides a task-to-core mapping $\mathcal{M}$ specifying on which core each task runs (such that $\mathcal{M}_{\tau}\in\mathcal{U}$ ). Also, we assume the existence of a function RunTasks that runs the tasks with pre-recorded sensor data to measure the WCET $W_{\tau}$ of each task $\tau$ . Finally, we use a configurable safety margin factor $\delta$ to protect against outliers in the WCET observations.
### VI-B Execution-Time Fairness
We perform two steps to achieve execution-time fairness. First, we compute fair execution time budgets $B_{\tau}$ for each task $\tau$ based on its importance $v_{\tau}$ , such that all tasks remain schedulable. Second, we maximize the particle counts while ensuring the WCET of each task $\tau$ is within its budget.
We define the ComputeBudgets function in Algorithm 3 (lines 1 - 6) to compute the execution time budgets of tasks with fair utilization. Our approach assumes partitioned scheduling, which means we can examine tasks assigned to different cores independently. For each core $c_{i}$ , we use the sensitivity analysis in the C-space of Bini et al. [29] to compute a maximum change $\lambda_{i}$ to the execution times of tasks for which they remain schedulable, given a direction vector $\mathbf{d}$ (initially, we assume all execution times are zero). We define $\mathbf{d}$ based on the importance values of the tasks (line 3) and pass it to the SensitivityAnalysis function to compute $\lambda_{i}$ (line 4). Then, we pick the minimum lambda among all cores (line 5) and use this to get fair execution time budgets for all tasks (line 6).
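The exact C-space analysis of Bini et al. computes $\lambda$ analytically. As an illustration of what ComputeBudgets needs from it, the sketch below approximates $\lambda$ for a single core by bisecting on $\lambda$ with $C_{\tau}=\lambda\cdot v_{\tau}$ and a standard response-time test; this is an assumption-laden stand-in, not the cited algorithm.

```python
import math

def schedulable(tasks):
    """Exact response-time analysis for preemptive fixed-priority tasks
    on one core; tasks = [(C, T)] sorted by increasing period, i.e., by
    decreasing rate-monotonic priority, with implicit deadlines D = T."""
    for i, (C_i, T_i) in enumerate(tasks):
        R = C_i
        while True:
            # interference from all higher-priority tasks
            R_next = C_i + sum(math.ceil(R / T_j) * C_j
                               for C_j, T_j in tasks[:i])
            if R_next > T_i:
                return False            # deadline miss
            if R_next == R:
                break                   # fixed point reached
            R = R_next
    return True

def max_lambda(importances, periods, iters=50):
    """Bisect the largest lambda such that execution times
    C_t = lambda * v_t keep the core schedulable."""
    lo, hi = 0.0, min(periods)          # upper bound on lambda (v >= 1)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        tasks = sorted(((mid * v, T) for v, T in zip(importances, periods)),
                       key=lambda ct: ct[1])
        lo, hi = (mid, hi) if schedulable(tasks) else (lo, mid)
    return lo
```

For two equally important tasks with period 1, the search converges to $\lambda=0.5$, i.e., each task receives half the core.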
Algorithm 3 Function to determine the number of particles to use for execution-time fairness.
1: function ComputeBudgets ( $\mathcal{M},\mathcal{T}$ )
2: for $c_{i}\in\mathcal{U}$ do
3: $\mathbf{d}\leftarrow\{v_{\tau}\ |\ \tau\in\mathcal{T}\land\mathcal{M}_{\tau}=c _{i}\}$
4: $\lambda_{i}\leftarrow\textsc{SensitivityAnalysis}(\mathbf{d})$
5: $\lambda\leftarrow\min_{i}\lambda_{i}$
6: $\mathbf{return}\ \{(\tau,v_{\tau}\cdot\lambda)\ |\ \tau\in\mathcal{T}\}$
7: function Configure EF ( $\mathcal{M},\mathcal{T}$ )
8: $L,U\leftarrow\{(\tau,1)\ |\ \tau\in\mathcal{T}\},\{(\tau,\infty)\ |\ \tau\in \mathcal{T}\}$
9: $N\leftarrow\{(\tau,1)\ |\ \tau\in\mathcal{T}\}$
10: $A\leftarrow\{\tau\ |\ \tau\in\mathcal{T}\land P_{\tau}=\emptyset\}$
11: $B\leftarrow\textsc{ComputeBudgets}(\mathcal{M},\mathcal{T})$
12: $F\leftarrow\{\tau\ |\ \tau\in\mathcal{T}\land v_{\tau}=0\}$
13: while $A\neq\emptyset$ do
14: $W\leftarrow\textsc{RunTasks}(\mathcal{M},\mathcal{T})$
15: for $\tau\in A$ do
16: if $L_{\tau}+1<U_{\tau}$ then
17: $L_{\tau},U_{\tau}\leftarrow\begin{cases}L_{\tau},N_{\tau}&\text{if }\frac{W_{ \tau}}{\delta}>B_{\tau}\\ N_{\tau},U_{\tau}&\text{otherwise}\end{cases}$
18: $N_{\tau}\leftarrow\begin{cases}2\cdot L_{\tau}&\text{if }U_{\tau}=\infty\\ \lfloor\frac{L_{\tau}+U_{\tau}}{2}\rfloor&\text{otherwise}\end{cases}$
19: else $N_{\tau},F\leftarrow L_{\tau},F\cup\{\tau\}$
20: $A\leftarrow\{\tau\ |\ \tau\in\mathcal{T}\land P_{\tau}\subseteq F\}\setminus F$
21: $\mathbf{return}\ N$
Algorithm 4 Computing the number of particles to use for particle fairness.
1: function Schedulable ( $\mathcal{M},\mathcal{T},k$ )
2: for $\tau\in\mathcal{T}$ do $n_{\tau}\leftarrow\lfloor k\cdot\overline{v}_{\tau}\rfloor$
3: $W\leftarrow\textsc{RunTasks}(\mathcal{M},\mathcal{T})$
4: for $(\tau,W_{\tau})\in W$ do $C_{\tau}\leftarrow\frac{W_{\tau}}{\delta}$
5: $\textbf{return}\ \textsc{RTA}(\mathcal{M},\mathcal{T})$
6: function ConfigurePF ( $\mathcal{M},\mathcal{T}$ )
7: $L,U\leftarrow 1,\infty$
8: while $L+1<U$ do
9: $k\leftarrow\begin{cases}2\cdot L&\text{if }U=\infty\\ \lfloor\frac{L+U}{2}\rfloor&\text{otherwise}\end{cases}$
10: $L,U\leftarrow\begin{cases}k,U&\text{if }\textsc{Schedulable}(\mathcal{M}, \mathcal{T},k)\\ L,k&\text{otherwise}\end{cases}$
11: $\textbf{return}\ \{(\tau,\lfloor L\cdot\overline{v}_{\tau}\rfloor)\ |\ \tau\in \mathcal{T}\}$
We use the execution time budget $B_{\tau}$ as an upper bound for the WCET of the task $\tau$ while trying to maximize its particle count $n_{\tau}$ . We need to carefully adjust the particle counts of tasks, as they may depend on each other. Assume we have a maximum particle count $n_{\tau}$ for a task $\tau$ . Increasing the particle count of a predecessor task $\tau^{\prime}$ of $\tau$ causes the execution time of $\tau$ to increase. However, if we keep the particle count of all predecessors fixed, the WCET $C_{\tau}$ of a task $\tau$ is proportional to its particle count $n_{\tau}$ . This enables using binary search to find the maximum $n_{\tau}$ .
We use these observations in the ConfigureEF function of Algorithm 3, which returns the particle count $N_{\tau}$ to use for each task $\tau$ . We keep track of a set of active tasks $A$ with no active predecessors (initialized on line 10), where $P_{\tau}$ denotes the set of predecessors of a task $\tau$ . Note on line 12 that the finished set $F$ is initialized to contain all tasks with importance zero (tasks $\tau$ that perform no inference are assumed to have $v_{\tau}=0$ ). While there are active tasks, we use the RunTasks function to measure the WCETs of all tasks (line 14). We keep track of lower and upper bounds, $L_{\tau}$ and $U_{\tau}$ , on the particle count of each task $\tau$ . For each active task $\tau$ , we compare its measured WCET $W_{\tau}$ to its budget $B_{\tau}$ and update the bounds accordingly (lines 16-19). When the binary search concludes, we set the particle count $N_{\tau}$ to the lower bound and add the task $\tau$ to the set of finished tasks $F$ (line 19). At the end of each iteration, we update the set of active tasks (line 20). When the active set $A$ is empty, the resulting particle counts $N$ are returned. Note that the algorithm assumes the task dependency graph is acyclic.
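The per-task search can be sketched in isolation as follows; `measure_wcet` is a hypothetical stand-in for a RunTasks measurement of one task at a given particle count:

```python
def max_particles(measure_wcet, budget):
    """Largest particle count whose measured WCET stays within budget.

    Mirrors the per-task bounds update of ConfigureEF: double the count
    until the budget is exceeded (upper bound still unknown), then
    bisect between the bounds.
    """
    lo, hi, n = 1, None, 1
    while hi is None or lo + 1 < hi:
        if measure_wcet(n) > budget:
            hi = n  # budget exceeded: shrink the upper bound
        else:
            lo = n  # within budget: raise the lower bound
        n = 2 * lo if hi is None else (lo + hi) // 2
    return lo
```

Note that the search never returns less than one particle, matching the initialization $L_{\tau}=1$ in the algorithm.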
### VI-C Particle Fairness
To achieve particle fairness, we seek a multiple $k$ such that $n_{\tau}=\lfloor k\cdot\overline{v}_{\tau}\rfloor$ for all tasks $\tau$ , where $\overline{v}_{\tau}$ is the normalized importance value. When we increase the value of $k$ , the execution time of all tasks increases, meaning we can use binary search to find the maximum $k$ . For a given $k$ , we need to determine whether the tasks are schedulable. We do this in the Schedulable function on lines 1-5 of Algorithm 4. We set the number of particles for each task (line 2) and run the tasks (line 3). Then, we set the WCET $C_{\tau}$ of each task based on its measured WCET (line 4) and perform a response time analysis [29] to ensure the tasks are schedulable (line 5).
We use the Schedulable function in the particle fairness function ConfigurePF (lines 6-11). We define the lower and upper bounds $L$ and $U$ on line 7 and binary search in this interval on lines 8-10. After the loop, $L$ holds the maximum multiple for which the system is schedulable. Based on this, we compute the particle count of each task $\tau$ on line 11.
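The outer search of ConfigurePF might look like the following sketch, with `schedulable` a hypothetical stand-in for the Schedulable function (measure WCETs, then run the response time analysis):

```python
def configure_pf(schedulable, v_norm):
    """Particle fairness sketch: find the largest multiple k for which
    running floor(k * v_bar_tau) particles per task keeps the system
    schedulable, then derive each task's particle count from k.

    schedulable(k): hypothetical predicate; v_norm: task -> v_bar_tau.
    Assumes k = 1 is schedulable, as the algorithm in the paper does.
    """
    lo, hi = 1, None
    while hi is None or lo + 1 < hi:
        k = 2 * lo if hi is None else (lo + hi) // 2
        if schedulable(k):
            lo = k
        else:
            hi = k
    return {t: int(lo * v) for t, v in v_norm.items()}
```

The doubling phase finds an unschedulable upper bound; the bisection phase then converges on the maximum schedulable $k$.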
## VII Automotive Case Study
To demonstrate the applicability of ProbTime to program time-critical embedded systems, we develop a case study for a localization pipeline in an automotive testbed. This section first introduces the testbed, followed by an evaluation of ProbTime.
### VII-A Automotive Testbed
The automotive industry is undergoing a paradigm shift to meet the challenges arising from the advent of autonomous driving [30]. These challenges require higher computational capacity and larger communication bandwidth than traditional automotive systems provide. This transition leads to a software-defined vehicle in which almost all functions of the car are enabled by software [31]. To demonstrate the applicability of our proposed methods in this context, we have developed an automotive testbed.
#### VII-A 1 Hardware
The testbed is designed around a 1:10 scale RC car. The E/E architecture consists of several distributed compute nodes of different characteristics connected via bus- and network-based interconnects (see Fig. 7).
Figure 7: Overview of the sensors on the car (left) and a picture of the RC-Car testbed (right).
Two microcontroller-based compute nodes act as the interface to most actuators and sensors of the car. High-performance compute nodes host the computation required for Advanced Driver Assistance (ADAS) or Autonomous Driving (AD) workloads; we use a Raspberry Pi 4B (4 Cortex-A72 cores at 1.5 GHz). These compute nodes are connected by a local Ethernet network.
We use three Time-of-Flight (TOF) sensors installed on the car, as indicated in Fig. 7. These sensors have high accuracy but can only measure distances up to $1.25~{}\text{m}$ . The front wheel sensors report the current speed of the car.
#### VII-A 2 Software
Figure 8: Code generation and compilation of the platform code, including a ProbTime interface.
The operating system on the Raspberry Pi 4B compute node is Linux-based with kernel 5.15.55-rt48-v8+ SMP PREEMPT_RT. The platform software is responsible for reading sensor data and controlling actuators, realized by dedicated periodic tasks distributed over the compute nodes. Each task is assigned a static priority and is mapped to a CPU core. The platform software realizes communication between ProbTime tasks via communication buffers located in shared memory. Additionally, sensor data is written to the ProbTime communication buffers, and data from the buffers is forwarded to the actuators.
We use code generation to produce the files that constitute the interface to a specific ProbTime program (see Fig. 8). Input to the code generation is the system declaration describing the ProbTime program. The remaining platform software is not affected by code generation. The compiler produces an executable for each compute node in the final step.
### VII-B Evaluation Questions
To evaluate ProbTime and our automatic configuration, we pose the following evaluation questions:
1. To what extent can ProbTime’s timing and probabilistic constructs be used to implement a realistic application?
2. What is the trade-off between the number of particles, execution time, and inference accuracy?
3. How does execution-time fairness compare to particle fairness concerning the allocation of particles and inference time when varying the importance of tasks?
4. To what extent, and with what accuracy, can the case study determine the position and avoid collisions when executed on the physical car?
### VII-C Positioning and Braking in ProbTime (Q1)
We implement a positioning and braking system in ProbTime as a case study. The system estimates the position of the RC car while it is moving using sensor inputs. Further, the system reacts by braking to prevent collisions with obstacles.
We present an overview of our positioning and braking system in Fig. 9. The figure shows the sensor inputs on the left-hand side. The steering angle is an input signal from the controller, while the other five sensors are directly related to sensors on the car (as shown in Fig. 7). We use the brake actuator (right-hand side) to activate the emergency brakes.
Figure 9: A graphical representation of the positioning and braking system.
The squares of Fig. 9 represent the tasks. We instantiate all distance estimation tasks (distance_SL, distance_SR, and distance_FC) from the same template. All tasks have a period of $500\text{\,}\mathrm{ms}$ except pos, which has a period of $1\text{\,}\mathrm{s}$ .
We use the speed task to estimate the car’s average speed since the last estimation based on speed observations from the wheels. Preliminary experiments show that the speed sensors are inaccurate at low speeds and that the top speed is reached almost instantaneously when accelerating. Due to the predictable behavior of the speed, we model it as being in one of three states: stationary, max speed, or transitioning (either accelerating or decelerating). This is simpler than using Bayesian linear regression (as we did in the speed model in Listing 2) and computationally cheaper.
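As a minimal illustration (not the model used in the paper), the three-state idea can be captured by a discrete Bayes filter over the states; the expected speeds, noise level, and transition probabilities below are all assumed values:

```python
import math

STATES = ["stationary", "transitioning", "max"]
MEAN_SPEED = {"stationary": 0.0, "transitioning": 0.5, "max": 1.0}  # m/s, assumed
TRANS = {  # assumed per-tick transition probabilities
    "stationary":    {"stationary": 0.8, "transitioning": 0.2, "max": 0.0},
    "transitioning": {"stationary": 0.1, "transitioning": 0.3, "max": 0.6},
    "max":           {"stationary": 0.0, "transitioning": 0.2, "max": 0.8},
}

def step(belief, obs, sigma=0.2):
    """One predict-update step: propagate the belief through the
    transition model, weight by a Gaussian likelihood of the wheel-speed
    observation, and renormalize."""
    pred = {s: sum(belief[r] * TRANS[r][s] for r in STATES) for s in STATES}
    post = {s: pred[s] * math.exp(-((obs - MEAN_SPEED[s]) ** 2) / (2 * sigma ** 2))
            for s in STATES}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}
```

Such a discrete filter is much cheaper than regression over continuous speeds, which is the motivation stated above.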
Our positioning task estimates the x- and y-coordinates of the car’s center. It also estimates the direction the car faces relative to an encoded map of the environment (provided by the user). The model starts from a prior belief of the position (initially, a known starting position) and estimates a trajectory along which the car moved until the current logical time of the task based on steering angle observations and speed estimations. We update our belief by comparing each input distance estimation with what the sensor would have observed at that timestamp, given that the car followed the estimated trajectory. We implement this using Bayesian linear regression.
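Comparing a predicted sensor reading against the observed one is the classic SMC reweighting step. A minimal sketch, with an assumed Gaussian noise model and a hypothetical `predict_dist` function derived from the encoded map, is:

```python
import math
import random

def reweight(particles, observed_dist, predict_dist, sigma=0.1):
    """One SMC update sketch (assumed, not the paper's implementation):
    weight each hypothesized pose by how well the distance the sensor
    would have observed from that pose (predict_dist, map-dependent)
    matches the actual reading, then resample in proportion to weights."""
    w = [math.exp(-((observed_dist - predict_dist(p)) ** 2) / (2 * sigma ** 2))
         for p in particles]
    total = sum(w)
    w = [x / total for x in w]
    return random.choices(particles, weights=w, k=len(particles))
```

Poses whose predicted readings disagree with the observation receive negligible weight and are unlikely to survive resampling.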
The braking task uses a probabilistic model to estimate the distance until a collision occurs, given steering angle observations, an encoded map, and the speed and position estimations. We activate the emergency brakes if the median distance is estimated to be below a fixed threshold.
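The decision rule itself can be sketched as follows; the 0.5 m threshold is an assumed value, not taken from the paper:

```python
import statistics

def should_brake(distance_samples, threshold=0.5):
    """Braking decision sketch: activate the emergency brakes when the
    median of the sampled distances-to-collision falls below a fixed
    threshold (threshold value assumed)."""
    return statistics.median(distance_samples) < threshold
```

Using the median rather than the mean makes the decision robust to outlier samples in the estimated distance distribution.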
In response to Q1, the current case study demonstrates that a non-trivial application can be efficiently implemented in ProbTime (using 600 lines of code).
### VII-D Experiments and Results
To answer evaluation questions Q2, Q3, and Q4, we perform three experiments on the positioning system of Fig. 9. We want to run as many particles per iteration as possible for the first experiment. Therefore, we run this on an Intel(R) Xeon(R) Gold 6148 CPU with 64 GB RAM using Ubuntu 22.04. We use the Raspberry Pi on the car for the second and third experiments. The first two experiments use pre-recorded data in a simulation, while the third uses live data. When discussing results or settings, we only consider the pos and braking tasks, as the other tasks perform no inference. The pos and braking tasks run on separate cores.
#### VII-D 1 Positioning Inference Accuracy (Q2)
We investigate how the number of particles impacts the inference accuracy and execution times. To run more particles per inference, we perform a slow-mode simulation in which both the replaying of observation data and the ProbTime tasks treat time as passing slower than wall-clock time. We use position estimates as a measure of inference accuracy because they are easily compared to a known final position from the pre-recorded data. In this experiment, we vary the particle count of the pos task while fixing the particle count of braking.
In Fig. 10, we present the average position error along the x- and y-axis (left) and the WCETs (right) for the two tasks over $100$ runs with varying particle counts in the positioning task. Note that the positioning model often fails to track the car when using $10^{2}$ particles. The estimation error along both axes decreases drastically as we increase the number of particles, most notably when increasing from $10^{2}$ to $10^{3}$ particles. The increased variance going from $10^{4}$ to $10^{5}$ particles is likely due to inaccuracies in our model. The WCET of the pos task scales linearly with the number of particles (note the logarithmic axes), while the braking task becomes noticeably slower when we increase the number of particles to $10^{5}$ . This is because it processes a position distribution sent from the pos task (i.e., braking depends on pos).
In connection to Q2, we have seen the impact of particle count on the inference accuracy. Few particles lead to a big uncertainty, and the benefit of increasing the particle count is negligible past a certain point. We also see that the number of particles is linearly connected to the WCET and that the particle count of a task may impact the WCET of other tasks.
Figure 10: Results showing the estimation error along the x- and y-axis of the car (left) and the WCETs (right) over ten runs, varying the number of particles in the positioning model. Both figures have a logarithmic x- and y-axis.
#### VII-D 2 Automatic Configuration (Q3)
To compare the automatic configuration using execution-time and particle fairness, we vary the ratio of importance between the pos and braking tasks. For each ratio, we measure the particle count and WCET of the configured tasks and the number of iterations needed to configure the system.
The resulting ratio of particle count and execution time is presented in Fig. 11. In our experiment, we vary the importance ratio between tasks by powers of two until the starting state cannot be scheduled. As expected, both approaches to fairness have one property scaling at the same rate as the importance ratio (WCET ratio for execution-time fairness and particle count ratio for particle fairness).
The WCET ratio for particle fairness peaks when the ratio imbalance is too large (e.g., when the ratio is $2^{8}$ ) as the more important task fully utilizes its core. Increasing the ratio imbalance past this point only causes the less important task to run fewer particles. For execution-time fairness, this imbalance results in the less important task being allocated an insufficient budget due to fair utilization.
Execution-time fairness needs $48.8\pm 11.8$ iterations (mean and standard deviation) to configure all tasks compared to $16.5\pm 2.8$ for particle fairness. Recall from Sec. VI that execution-time fairness considers task dependencies while particle fairness runs independently of them.
For our model, the problem of task dependencies is hardly noticeable. However, what if the speed task also performed inference? To investigate this, we configure a variant of the positioning and braking model in which speed performs inference and runs on a separate core from pos and braking, with all three tasks given the same importance. We find that all tasks get $385$ particles using particle fairness, whereas using execution-time fairness, pos gets $21$ particles, braking gets $95$ particles, and speed gets $24199$ particles. Under execution-time fairness, the speed task has time to run many particles, which slows down the other tasks that depend on its output. This issue could be mitigated by setting a (machine-specific) upper bound on the particle count or by giving pos and braking higher importance.
Figure 11: The ratio of particles (using suffix P) and worst-case execution time (suffix W) when varying the importance ratio. We use prefix EF for execution-time fairness results and PF for particle fairness. Note the logarithmic x- and y-axes.
Figure 12: The encoded map ($18.6\,\mathrm{m} \times 9.5\,\mathrm{m}$) of the corridor in which we drive the car. The overlay shows particles spreading when less information is available (top-right) and gathering when close to distinguishable obstacles (bottom-left).
In connection to Q3, we argue that particle fairness is preferable. First, the problems above only apply to execution-time fairness. Second, there is a direct connection between particle count and inference accuracy. Therefore, we argue that reasoning about the allocation of particles directly seems more natural than doing it in terms of execution time.
#### VII-D3 Positioning and Braking (Q4)
For this experiment, we run the tasks on the physical car. To evaluate the positioning model, we drive the car around in a corridor (Fig. 12).
As we measure the car’s position by hand, we can only make measurements when the car is stationary. Therefore, we drive it from a known starting position and compare the actual final position to the last position estimated by the model. We drive the car in a clockwise direction for a full lap ten times using two distinct scenarios to illustrate how environmental details impact the accuracy of the position estimates. In scenario 1, we start at the bottom-left corner and drive past the starting point. In scenario 2, we begin in the top-right corner and stop along the long corridor at the top. During preliminary evaluations, we found that using importance $4$ for pos and $1$ for braking yields good results. The configured system runs inference using $1312$ particles in pos and $328$ in braking.
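The reported particle counts are consistent with an importance-proportional split of a total particle budget. The sketch below, assuming the configuration arrived at a total budget of 1640 particles, reproduces the arithmetic; `allocate_particles` is a hypothetical helper, and ProbTime's actual configuration additionally accounts for schedulability:

```python
def allocate_particles(importance, budget):
    """Split a total particle budget across tasks in proportion to their
    user-assigned importance values (hypothetical helper; ProbTime's
    actual configuration also checks schedulability constraints)."""
    total = sum(importance.values())
    return {task: budget * w // total for task, w in importance.items()}

# A 4:1 importance split of a 1640-particle budget yields the
# reported configuration: 1312 particles for pos, 328 for braking.
alloc = allocate_particles({"pos": 4, "braking": 1}, budget=1640)
print(alloc)  # {'pos': 1312, 'braking': 328}
```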
We evaluate the automated braking by driving the car around in the corridor. We find that the emergency brakes are conservative—while they prevent collisions, they often activate in situations where it is unnecessary. This is due to uncertainty in the position estimations. Due to the conservative nature of the braking task, we turn off the brake actuator when evaluating the positioning model.
We present the positioning results in Table I. The error along the y-axis is small in scenario 2 because both side sensors are in range in the narrow corridor where the car stops, while the x-axis error is large because the front distance sensor is outside of its range in the corridor. Despite the short range of our sensors, the lack of distinguishing details in the environment, and the limited computational power of the car testbed, our model successfully tracks the car for a full lap in the corridor. Tracking the car in itself is not a novelty, but it shows that the methodology works in practice for a non-trivial application.
TABLE I: Results showing the average error in the x- and y-direction (in meters) and the Euclidean distance for our two scenarios, with standard deviations.
| Scenario | Error x (m) | Error y (m) | Euclidean distance (m) |
| --- | --- | --- | --- |
| 1 | $0.50\pm 0.58$ | $0.28\pm 0.23$ | $0.63\pm 0.56$ |
| 2 | $1.19\pm 0.79$ | $0.24\pm 0.14$ | $1.21\pm 0.80$ |
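The positioning model rests on SMC (particle filter) inference. The following is a minimal single-step sketch, assuming Gaussian motion and sensor noise; `expected_dist` is a hypothetical map lookup standing in for ray-casting against the encoded corridor map:

```python
import math
import random

def step_particle_filter(particles, weights, motion, observed_dist,
                         expected_dist, sigma):
    """One SMC step of a simplified positioning model: propagate each
    particle with noisy motion, reweight by a Gaussian likelihood of the
    observed sensor distance, and resample to an equally weighted set."""
    # Predict: apply the commanded motion with additive Gaussian noise.
    moved = [(x + motion[0] + random.gauss(0, 0.05),
              y + motion[1] + random.gauss(0, 0.05)) for x, y in particles]
    # Weight: Gaussian likelihood of the observed distance reading.
    w = [wi * math.exp(-(observed_dist - expected_dist(p)) ** 2
                       / (2 * sigma ** 2))
         for wi, p in zip(weights, moved)]
    total = sum(w) or 1.0
    w = [wi / total for wi in w]
    # Resample: draw a new particle set proportionally to the weights.
    new = random.choices(moved, weights=w, k=len(moved))
    return new, [1.0 / len(new)] * len(new)
```

A particle set that agrees with the sensor reading keeps its spread, while particles whose map-predicted distance disagrees with the observation are weighted down and resampled away, matching the gathering behavior near distinguishable obstacles in Fig. 12.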
## VIII Related Work
Although surprisingly little has been done in the intersection of probabilistic programming and languages for real-time systems, there is a large body of work in each category. This section gives a summary of the closest related work.
Probabilistic programming languages. Languages for performing Bayesian inference date back to the early 1990s; BUGS [32] is one of the earliest languages for Bayesian networks. The term probabilistic programming was coined much later but is now generally used for all languages performing Bayesian inference. Starting with the Church language [33], a new, more general form of probabilistic programming languages emerged, called universal PPLs. In such languages, the probabilistic model is encoded in a Turing-complete language, which enables stochastic branching and a possibly infinite number of random variables. There exist many experimental PPLs in this category, e.g., Gen [11], Anglican [9], Turing [10], WebPPL [7], Pyro [6], and CorePPL [12]. Other state-of-the-art PPL environments are, e.g., Stan [8] and Infer.NET. Similar to many of these environments, ProbTime also makes use of the standard core constructs (sample, observe, and infer), where the key difference is that time and timeliness are also taken into consideration in ProbTime.
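The core constructs mentioned above can be illustrated with a minimal likelihood-weighted sampler. This is a generic PPL sketch in Python, where `infer`, `sample`, and `observe` are illustrative stand-ins rather than ProbTime syntax or its actual runtime:

```python
import math
import random

def infer(model, particles=1000):
    """Minimal likelihood-weighted inference: run the model many times,
    accumulating a log-weight at every observe (an illustrative stand-in
    for a PPL's infer construct)."""
    weighted = []
    for _ in range(particles):
        trace = {"logw": 0.0}
        def sample(draw):            # sample: draw from a prior
            return draw()
        def observe(logpdf):         # observe: condition by weighting
            trace["logw"] += logpdf
        weighted.append((model(sample, observe), math.exp(trace["logw"])))
    return weighted

# A two-line model: latent coin bias, conditioned on seeing one head.
def coin(sample, observe):
    bias = sample(lambda: 1 - random.random())  # uniform prior on (0, 1]
    observe(math.log(bias))                     # log-likelihood of heads
    return bias

samples = infer(coin)
post_mean = sum(b * w for b, w in samples) / sum(w for _, w in samples)
# post_mean approximates 2/3, the mean of the Beta(2, 1) posterior.
```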
Real-time programming languages. There is a large body of literature on programming languages that incorporate time and timing as part of the language semantics. Ada [19] has explicit support for timed language constructs, and the real-time extension to Java [34] supports time and event handling. More recently, Timed C [17] was proposed to incorporate timing constructs directly in the language; this work was later extended to support sensitivity analysis [35]. Our work has been inspired by Timed C; the semantics for periodic is loosely based on sdelay in Timed C, although the implementation is different. As in Timed C, ProbTime introduces tagging with timestamps. There are several other languages and environments that have timestamped and tagged values as central semantic concepts. For instance, PTIDES [36] extends the discrete-event (DE) model. Lingua Franca [18] is another recent effort that introduces deterministic actors called reactors, whose only means of communication is through an ordered set of reactions.
Synchronous programming. There are also other approaches to programming real-time systems where time is abstracted as ticks. Specifically, the synchronous programming paradigm [37] implements such an approach, where ProbZelus [20] is a recent programming language extending synchronous reactive programming with probabilistic constructs. The ProbZelus work is based on the delayed sampling concept [38] to automatically make inference efficient when conjugate priors can be exposed in the model. A key difference between ProbZelus and ProbTime is our concept of fairness, where the number of particles used in inference is automatically adjusted based on user-provided importance values.
Reward-based scheduling. Our automatic configuration is related to reward-based scheduling [39], where tasks consist of a mandatory and an optional part. A non-decreasing reward function maps the time spent running the optional part to a real value. This approach aims to find a schedule for maximizing the reward while all tasks remain schedulable. In ProbTime, inference is the optional part of tasks as we can vary how many particles we use. The importance values assigned to tasks implicitly encode a reward function.
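The reward-based view can be sketched as a greedy search. The sketch below assumes a concave reward `importance * log(1 + n)` over the particle count `n` and a fixed utilization cost per particle per period; both are illustrative assumptions, and ProbTime's fairness-guided configuration differs in its details:

```python
import math

def configure(tasks, slack, step=8):
    """Greedy sketch of reward-based scheduling: repeatedly give a batch
    of particles to the task whose marginal reward per unit of added
    utilization is highest, while total utilization fits in the slack."""
    alloc = {name: 0 for name in tasks}
    used = 0.0
    while True:
        def gain(name):
            t, n = tasks[name], alloc[name]
            # Marginal reward of one more batch, per unit of utilization.
            return (t["importance"]
                    * (math.log(1 + n + step) - math.log(1 + n))
                    / t["cost"])
        feasible = [n for n in tasks
                    if used + tasks[n]["cost"] * step <= slack]
        if not feasible:
            return alloc
        best = max(feasible, key=gain)
        alloc[best] += step
        used += tasks[best]["cost"] * step

# Two tasks sharing 20% spare utilization; pos is four times as important.
print(configure({"pos": {"importance": 4, "cost": 1e-4},
                 "braking": {"importance": 1, "cost": 1e-4}}, slack=0.20))
```

With a concave reward, the greedy loop naturally gives the more important task a larger, but not exclusive, share of the slack, mirroring how importance values implicitly encode a reward function.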
## IX Conclusion
In this paper, we introduce a new kind of language called real-time probabilistic programming language (RTPPL). To demonstrate this new kind of language, we develop and analyze a domain-specific RTPPL called ProbTime. We also introduce the concept of fairness in RTPPLs and illustrate the strength of particle fairness.
## Acknowledgment
We would like to thank John Wikman, Linnea Stjerna, Gizem Çaylak, Viktor Palmkvist, Anders Ågren Thuné, Thilanka Thilakasiri, and Rodolfo Jordão for their valuable feedback.
This project is financially supported by the Swedish Foundation for Strategic Research (FFL15-0032 and RIT15-0012). The research has also been carried out as part of the Vinnova Competence Center for Trustworthy Edge Computing Systems and Applications (TECoSA) at the KTH Royal Institute of Technology.
## References
- [1] A. D. Gordon, T. A. Henzinger, A. V. Nori, and S. K. Rajamani, “Probabilistic programming,” in Proceedings of the on Future of Software Engineering. ACM, 2014, pp. 167–181.
- [2] J.-W. van de Meent, B. Paige, H. Yang, and F. Wood, “An introduction to probabilistic programming,” arXiv preprint arXiv:1809.10756, 2018.
- [3] A. Doucet, N. De Freitas, and N. Gordon, “An introduction to sequential Monte Carlo methods,” Sequential Monte Carlo methods in practice, pp. 3–14, 2001.
- [4] D. Lundén, J. Borgström, and D. Broman, “Correctness of Sequential Monte Carlo Inference for Probabilistic Programming Languages,” in Proceedings of the 30th European Symposium on Programming (ESOP), ser. LNCS, vol. 12648. Springer, 2021, pp. 404–431.
- [5] S. Brooks, “Markov chain Monte Carlo method and its application,” Journal of the royal statistical society: series D (the Statistician), vol. 47, no. 1, pp. 69–100, 1998.
- [6] E. Bingham, J. P. Chen, M. Jankowiak, F. Obermeyer, N. Pradhan, T. Karaletsos, R. Singh, P. Szerlip, P. Horsfall, and N. D. Goodman, “Pyro: Deep Universal Probabilistic Programming,” Journal of Machine Learning Research, vol. 20, no. 28, pp. 1–6, 2019.
- [7] N. D. Goodman and A. Stuhlmüller, “The Design and Implementation of Probabilistic Programming Languages,” http://dippl.org/ [Last accessed: May 17, 2023].
- [8] B. Carpenter, A. Gelman, M. D. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. Brubaker, J. Guo, P. Li, and A. Riddell, “Stan: A probabilistic programming language,” Journal of Statistical Software, vol. 76, no. 1, 2017.
- [9] D. Tolpin, J.-W. van de Meent, H. Yang, and F. Wood, “Design and implementation of probabilistic programming language anglican,” in Proceedings of the 28th Symposium on the Implementation and Application of Functional Programming Languages, ser. IFL 2016. New York, NY, USA: ACM, 2016, pp. 6:1–6:12.
- [10] H. Ge, K. Xu, and Z. Ghahramani, “Turing: a language for flexible probabilistic inference,” in International conference on artificial intelligence and statistics (AISTATS). PMLR, 2018, pp. 1682–1690.
- [11] M. F. Cusumano-Towner, F. A. Saad, A. K. Lew, and V. K. Mansinghka, “Gen: a general-purpose probabilistic programming system with programmable inference,” in Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). ACM, 2019, pp. 221–236.
- [12] D. Lundén, J. Öhman, J. Kudlicka, V. Senderov, F. Ronquist, and D. Broman, “Compiling Universal Probabilistic Programming Languages with Efficient Parallel Sequential Monte Carlo Inference,” in Proceedings of the 31st European Symposium on Programming (ESOP), 2022, pp. 29–56.
- [13] A. Lew, M. Agrawal, D. Sontag, and V. Mansinghka, “PClean: Bayesian data cleaning at scale with domain-specific probabilistic programming,” in International Conference on Artificial Intelligence and Statistics (AISTATS). PMLR, 2021, pp. 1927–1935.
- [14] F. Ronquist, J. Kudlicka, V. Senderov, J. Borgström, N. Lartillot, D. Lundén, L. Murray, T. B. Schön, and D. Broman, “Universal probabilistic programming offers a powerful approach to statistical phylogenetics,” Communications Biology, vol. 4, no. 1, pp. 1–10, 2021.
- [15] N. Gothoskar, M. Cusumano-Towner, B. Zinberg, M. Ghavamizadeh, F. Pollok, A. Garrett, J. Tenenbaum, D. Gutfreund, and V. Mansinghka, “3DP3: 3D Scene Perception via Probabilistic Programming,” Advances in Neural Information Processing Systems (NeurIPS), vol. 34, pp. 9600–9612, 2021.
- [16] N. D. Goodman, J. B. Tenenbaum, and T. P. Contributors, “Probabilistic Models of Cognition,” 2016, (2nd ed.) https://probmods.org/ [Last accessed: May 17, 2023].
- [17] S. Natarajan and D. Broman, “Timed C: An Extension to the C Programming Language for Real-Time Systems,” in Proceedings of IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), 2018.
- [18] M. Lohstroh, C. Menard, S. Bateni, and E. A. Lee, “Toward a Lingua Franca for Deterministic Concurrent Systems,” ACM Transactions on Embedded Computing Systems (TECS), vol. 20, no. 4, pp. 1–27, 2021.
- [19] A. Burns and A. Wellings, Concurrent and real-time programming in Ada. Cambridge University Press, 2007.
- [20] G. Baudart, L. Mandel, E. Atkinson, B. Sherman, M. Pouzet, and M. Carbin, “Reactive Probabilistic Programming,” in Proceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2020, pp. 898–912.
- [21] G. Baudart, L. Mandel, and R. Tekin, “Jax based parallel inference for reactive probabilistic programming,” in Proceedings of the 23rd ACM SIGPLAN/SIGBED International Conference on Languages, Compilers, and Tools for Embedded Systems, 2022, pp. 26–36.
- [22] A. Pfeffer, “Ctppl: A continuous time probabilistic programming language.” in IJCAI, 2009, pp. 1943–1950.
- [23] D. Broman, “A Vision of Miking: Interactive Programmatic Modeling, Sound Language Composition, and Self-Learning Compilation,” in Proceedings of the 12th ACM SIGPLAN International Conference on Software Language Engineering, ser. SLE ’19. ACM, 2019, pp. 55–60.
- [24] F. Gustafsson, F. Gunnarsson, N. Bergman, U. Forssell, J. Jansson, R. Karlsson, and P.-J. Nordlund, “Particle filters for positioning, navigation, and tracking,” IEEE Transactions on signal processing, vol. 50, no. 2, pp. 425–437, 2002.
- [25] C. A. Naesseth, F. Lindsten, T. B. Schön et al., “Elements of Sequential Monte Carlo,” Foundations and Trends® in Machine Learning, vol. 12, no. 3, pp. 307–392, 2019.
- [26] C. M. Kirsch and A. Sokolova, “The logical execution time paradigm,” in Advances in Real-Time Systems (to Georg Färber on the occasion of his appointment as Professor Emeritus at TU München after leading the Lehrstuhl für Realzeit-Computersysteme for 34 illustrious years), S. Chakraborty and J. Eberspächer, Eds. Springer, 2012, pp. 103–120.
- [27] C. L. Liu and J. W. Layland, “Scheduling algorithms for multiprogramming in a hard-real-time environment,” Journal of the ACM (JACM), vol. 20, no. 1, pp. 46–61, 1973.
- [28] M. Lohstroh, E. A. Lee, S. A. Edwards, and D. Broman, “Logical Time for Reactive Software,” in Proceedings of Cyber-Physical Systems and Internet of Things Week, 2023, pp. 313–318.
- [29] E. Bini, M. Di Natale, and G. Buttazzo, “Sensitivity analysis for fixed-priority real-time systems,” Real-Time Systems, vol. 39, pp. 5–30, 2008.
- [30] D. Zerfowski and D. Buttle, “Paradigm shift in the market for automotive software,” ATZ worldwide, vol. 121, no. 9, pp. 28–33, 2019.
- [31] H. Windpassinger, “On the way to a software-defined vehicle,” ATZelectronics worldwide, vol. 17, no. 7, pp. 48–51, 2022.
- [32] W. R. Gilks, A. Thomas, and D. J. Spiegelhalter, “A language and program for complex Bayesian modelling,” The Statistician, vol. 43, no. 1, pp. 169–177, 1994.
- [33] N. D. Goodman, V. K. Mansinghka, D. Roy, K. Bonawitz, and J. B. Tenenbaum, “Church: A language for generative models,” in Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence, ser. UAI’08. Arlington, Virginia, United States: AUAI Press, 2008, pp. 220–229.
- [34] G. Bollella and J. Gosling, “The real-time specification for java,” Computer, vol. 33, no. 6, pp. 47–54, 2000.
- [35] S. Natarajan, M. Nasri, D. Broman, B. B. Brandenburg, and G. Nelissen, “From code to weakly hard constraints: A pragmatic end-to-end toolchain for Timed C,” in Proceedings of the IEEE Real-Time Systems Symposium (RTSS). IEEE, 2019, pp. 167–180.
- [36] J. C. Eidson, E. A. Lee, S. Matic, S. A. Seshia, and J. Zou, “Distributed real-time software for cyber–physical systems,” Proceedings of the IEEE, vol. 100, no. 1, pp. 45–59, 2011.
- [37] A. Benveniste and G. Berry, “The Synchronous Approach to Reactive and Real-Time Systems,” Proc. of the IEEE, vol. 79, no. 9, pp. 1270–1282, 1991.
- [38] L. Murray, D. Lundén, J. Kudlicka, D. Broman, and T. Schön, “Delayed Sampling and Automatic Rao-Blackwellization of Probabilistic Programs,” in Proceedings of Machine Learning Research : International Conference on Artificial Intelligence and Statistics (AISTATS). PMLR, 2018.
- [39] H. Aydin, R. Melhem, D. Mossé, and P. Mejia-Alvarez, “Optimal reward-based scheduling for periodic real-time tasks,” IEEE Transactions on Computers, vol. 50, no. 2, pp. 111–130, 2001.