Yet, not all types of necessary variations can be anticipated, and unforeseen changes to software may happen. Thus, systems that are meant to live in such an open-ended world must provide self-adaptivity (micro adaptation), but there is an additional need for adaptability of the system so that it can be adjusted externally (macro adaptation).
This paper gives an overview of the graph-based runtime adaptation framework GRAF and sketches how it targets both types of adaptation. A self-adaptive software system (SASS) is designed to face foreseen changes in its operating environment. In practice, however, there also have to be adaptive maintenance actions. Although SASSs today can already deal with changes in their environment, they are not well-suited to tackle an open-ended world, which is especially characterized by unforeseen changes [SEGL10].
To handle these changes, classical approaches for software evolution have to be applied. This paper complements the preceding publications by focusing on the use of GRAF for achieving longevity of software. It is organized as follows. Then, we introduce the core ideas behind micro and macro adaptation in Section 3 and give an overview of the implementation work and the case studies done so far in Section 4. Finally, we conclude this paper in Section 6 and give an outlook on possible areas of future work.
The graph-based runtime adaptation framework GRAF acts as an external controller of the adaptable software. GRAF is not just an adaptation manager. The main layers and components are illustrated in Figure 1. Subsequently, we discuss the structure of the adaptable software and GRAF as well as the connecting interfaces.
The responsibilities and tasks of each framework layer are introduced, and we explain how each of them contributes to the whole architecture of an SASS that can be extended in reaction to unforeseen changes. The adaptable software is set up so that it can be controlled by GRAF, which plays the role of the external adaptation management unit. Adaptable software can be built by migrating an existing, non-adaptable software system towards adaptability [ADET11].
Alternatively, it can be developed from scratch. When applying GRAF in a migration context, original elements are those software elements (e.g., classes) that stem from the existing system. In the context of creating adaptable software from scratch, these elements can be thought of as helper elements, such as classes provided by imported libraries. They are the building blocks that support a certain degree of variability, either by providing data about changes or by offering actions to be used for adjusting the system.
Every adaptable software element needs to implement some of the adaptation interfaces defined by GRAF, as described in Table 1:
- StateVar: exposes variables that hold information about the operating environment.
- SyncStateVar: exposes variables similar to StateVar, but their representation in the runtime model can also be changed from outside. The new value is then propagated back to the managed adaptable software.
- AtomicAction: exposes methods that can be used as atomic actions in the behavioral model.
They are used by the interpreter to execute individual actions that are composed inside the behavioral model. These layers are described in more detail in the following. The adaptation middleware provides a set of reusable components that are independent of the adaptable software. State Variable Adapters. State variable adapters support the propagation of changed values from annotated variables in the adaptable elements to the runtime model by using the StateVar interface.
In addition, the SyncStateVar interface also uses these adapters when propagating tuned variable values from their runtime model representation back into the adaptable software. Model Interpreter. We give an excerpt in Figure 2. Each Action in the behavioral model is associated with an existing, implemented method in the adaptable software. Starting at an InitialNode and walking along a path in the behavioral model, the interpreter resolves conditions expressed as queries on the runtime model at DecisionNode vertices.
Interpretation terminates when a FinalNode vertex is reached and the adaptable software continues executing.
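The interpretation scheme just described can be sketched in a few lines. The following Python sketch is illustrative only: the data layout and the names (interpret, the node kinds, the action table) are our own assumptions, not GRAF's actual API.

```python
# Illustrative sketch of behavioral-model interpretation (not GRAF's real API).
# The behavioral model is a graph whose nodes are actions or decisions.

def interpret(model, actions, state):
    """Walk the behavioral model from its initial node to the final node."""
    node = model["initial"]
    while node != "final":
        kind, payload, successors = model["nodes"][node]
        if kind == "action":
            actions[payload](state)          # call the bound atomic action
            node = successors[0]
        elif kind == "decision":
            # conditions are queries evaluated on the runtime model (here: state)
            node = successors[0] if payload(state) else successors[1]
    return state

# Tiny example model: switch the light on if it is dark, otherwise do nothing.
model = {
    "initial": "n1",
    "nodes": {
        "n1": ("decision", lambda s: s["lux"] < 10, ["n2", "final"]),
        "n2": ("action", "light_on", ["final"]),
    },
}
actions = {"light_on": lambda s: s.update(light=True)}
state = interpret(model, actions, {"lux": 5, "light": False})
```

Since the condition holds (5 < 10), the walk visits the action node and the exposed method is executed before the final node is reached.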
In addition, this stateful layer contains utility components that simplify and encapsulate necessary operations on the graph-based runtime model, such as evaluating queries or executing transformations. Runtime Model. An excerpt of the schema that is currently used is illustrated in Figure 2. According to this schema, every runtime model contains a set of StateVariable vertices to store exposed data from the adaptable software and Activity vertices to store entry points to behavior executable by runtime interpretation.
In this schema, behavioral models are kept as variants of UML activity diagrams, which are represented by their abstract syntax graphs. In general, other types of behavioral models, such as Petri nets or statecharts, could be used as well. Schema and Constraints. GRAF supports the validation of constraints on the runtime model. A set of constraints can be derived from the runtime model schema. At present, constraint checking mechanisms have been implemented only prototypically and are not yet used in the adaptation loop. Model Manager. The model manager is responsible for all tasks that are related to manipulating the runtime model.
It keeps the runtime model in sync with the inner state of the adaptable software, which is exposed via the StateVar and SyncStateVar interfaces. In addition, the model manager acts as a controller for all other types of accesses to the runtime model and hence provides a set of utility services for querying and transforming. Furthermore, the model manager is responsible for the evaluation of constraints after a transformation has been applied.
It can even roll back changes to the runtime model, and it informs the rule engine about such a failed adaptation attempt. Model History. The model history keeps a record of past states of the runtime model in a history repository. The rule engine can access this data by querying the history repository via the model manager. That way, the rule engine can learn from its past actions by using the available history.
The application of data mining techniques such as frequent itemset analysis [AS94] becomes possible once the repository contains a representative amount of data. Moreover, the collected data may provide valuable information for maintaining the SASS. The model history has not been implemented yet. This stateful layer is composed of a repository with adaptation rules and a rule engine that uses its own heuristics to plan adaptation. Rule Engine. In GRAF, the rule engine plays the role of an adaptation manager.
Moreover, it can receive additional information on the current or past state of the adaptable software by using the model history. After gathering all the required information for planning, the rule engine uses the set of available adaptation rules to choose a compound transformation to be executed on the runtime model via the model manager. Adaptation Rules. Adaptation rules are event-condition-action rules that describe atomic adaptation tasks. The three parts of an adaptation rule are as follows:
1. The event for the application of a rule occurs whenever the runtime model changes and the rule engine needs to react. The model manager keeps track of these events.
2. The condition is expressed by a boolean query on the runtime model that tests whether an adaptation action may be applied or not.
3. The action is a model transformation and encodes the actual adaptation to be performed on the runtime model.
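The event-condition-action structure of adaptation rules can be pictured as a small sketch. The class names (Rule, RuleEngine) and the rule-selection strategy below are invented for illustration and do not reproduce GRAF's actual rule engine.

```python
# Sketch of event-condition-action adaptation rules (names are illustrative).

class Rule:
    def __init__(self, event, condition, action):
        self.event = event          # which model-change events trigger the rule
        self.condition = condition  # boolean query on the runtime model
        self.action = action        # transformation applied to the runtime model

class RuleEngine:
    def __init__(self, rules):
        self.rules = rules

    def on_model_change(self, event, model):
        """Called by the model manager whenever the runtime model changes."""
        for rule in self.rules:
            if rule.event == event and rule.condition(model):
                rule.action(model)  # apply the transformation

# Example: when a state variable changes, switch to a low-power behavior variant.
model = {"battery": 15, "behavior": "normal"}
rules = [Rule("state_var_changed",
              lambda m: m["battery"] < 20,
              lambda m: m.update(behavior="low_power"))]
engine = RuleEngine(rules)
engine.on_model_change("state_var_changed", model)
```

A real engine would additionally consult its heuristics and the model history before choosing among several applicable rules; the linear scan here is the simplest possible strategy.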
Control Panel. The control panel gives maintainers external access to GRAF. They can (i) observe the log of adaptation steps by querying the runtime model history, (ii) modify adaptation rules and model constraints, and (iii) override decisions and heuristics of the rule engine. The presented version of GRAF does not implement the control panel. Due to the limits of SASSs in dealing with unforeseen situations, as well as the increasing entropy (disorder) that may be caused by self-adaptation, phases of micro and macro adaptation need to alternate to achieve longevity.
A detailed description is given in [Der10]. When a change listener of the rule engine receives an event from the model manager, the rule engine analyzes the change and plans a compound transformation to be applied on the runtime model via the model manager. Afterwards, the runtime model is adjusted. Finally, an actual behavior change of the managed adaptable software has to be achieved.
Two different ways are supported in GRAF: (i) adaptation via state variable values and (ii) adaptation via interpretation of a behavioral sub-model. For this mechanism to work, the corresponding variable of the adaptable application must be exposed to GRAF via the SyncStateVar interface. If the runtime model value of such a state variable is changed by a transformation, the adaptable software is made aware of the new value whenever the variable is used in the adaptable software.
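Adaptation via state variable values can be illustrated with a minimal sketch: a variable exposed via SyncStateVar reads its current value from its runtime model representation whenever it is used. The class and method names below are assumptions made for illustration, not GRAF's actual types.

```python
# Minimal sketch of parameter adaptation via a synchronized state variable.

class RuntimeModel:
    """Holds the model-side representation of exposed state variables."""
    def __init__(self):
        self.state_vars = {}

class SyncStateVar:
    """A variable whose value can be tuned from the runtime model side."""
    def __init__(self, model, name, value):
        self.model, self.name = model, name
        model.state_vars[name] = value

    def get(self):
        # Every use of the variable observes the (possibly adapted) model value.
        return self.model.state_vars[self.name]

model = RuntimeModel()
threshold = SyncStateVar(model, "light_threshold", 10)

# A transformation on the runtime model tunes the value ...
model.state_vars["light_threshold"] = 25
# ... and the adaptable software sees the new value on its next use.
```

The essential point is that the adaptable software never caches the value: each read goes through the model representation, so a model transformation takes effect at the next use.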
If this behavior is the default behavior, there is no need for adaptation, and the interpreter returns this information so that the adaptable element executes the existing default code block in the adaptable software. We start top-down, from the adaptation management layer. Ways to extend the adaptivity property of the SASS are to (i) add new adaptation rules to the repository, (ii) modify existing adaptation rules in terms of their queries and transformations, or even (iii) remove adaptation rules. Such changes can be made via the external control panel. Ideally, the rule engine does not have to be adjusted.
Changes to its analyzing and planning algorithms may be needed, though. By separating these algorithms into dedicated modules and keeping them independent from the core, such changes can be isolated. At the runtime model layer, further adjustments are possible. More complex changes are schema-related adjustments. Developers may want to change the language for modeling the runtime model and use Petri nets or statecharts instead of UML activity diagrams. Given such a heavy change, the adaptation rules will be affected as well, and the model interpreter has to be adjusted or even replaced.
As the adaptation middleware is a generic communication layer, it remains stable most of the time. Only a change of the runtime model schema (the description of syntax) will result in a rewrite of the model interpreter (the description of semantics). Changes to state variable adapters are only needed in cases where the concept of state variables is modeled differently by the runtime model schema.
There might be necessary changes at the level of the adaptable software as well. For instance, existing state variables can be deleted as they may become obsolete over time. The necessity to start a macro adaptation should be detected by the framework itself. This can be done, for instance, by special rules that test quality properties of the model or by periodically analyzing the history information.
Finally, traceability between different elements at all layers is important. With the help of aspect-oriented programming (AOP) techniques and a powerful AOP library like JBoss AOP, these annotations are then used in pointcut expressions, resulting in automatic instrumentation of the byte code to connect the adaptable software to GRAF via its middleware.
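GRAF's actual binding uses Java annotations and JBoss AOP pointcuts; as a language-neutral analogue, the same idea (mark an element, let the framework intercept its use) can be mimicked with a Python decorator. Everything below is our own illustrative stand-in, not the real instrumentation.

```python
# Illustrative stand-in for annotation-driven instrumentation: a decorator
# reports every call of an exposed method to a (dummy) middleware, similar in
# spirit to a pointcut intercepting annotated elements.

middleware_log = []

def atomic_action(func):
    """'Annotation' marking a method as an AtomicAction for the framework."""
    def wrapper(*args, **kwargs):
        middleware_log.append(func.__name__)   # notify the middleware
        return func(*args, **kwargs)
    return wrapper

class InteriorLight:
    @atomic_action
    def switch_on(self):
        self.on = True

light = InteriorLight()
light.switch_on()
```

The decorator plays the role of the pointcut: the adaptable class stays oblivious to the framework, and the interception point is declared once instead of being scattered through the code.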
Models, schemas, as well as queries and transformations are implemented in the technological space of TGraphs [ERW08]. The generated Java classes represent vertex and edge types of runtime models (TGraphs) that conform to the runtime model schema. The generated API is then used for creating, accessing, and modifying runtime models. Although self-adaptive software can handle such challenges within given bounds, long-living software systems need to be extendable and adaptable in an open-ended world, where changes are unforeseen.
The work by Schmid et al. strengthened our motivation to present GRAF in the context of longevity. At design time, a base model and its variant architecture models are created. The models include invariants and constraints that can be used to validate adaptation rules. Vogel and Giese [VG10] propose a model-driven approach which provides multiple architectural runtime models at different abstraction levels. They derive abstract models from the source models. Similar to DiVA, this research also targets adaptivity from an architectural point of view.
Similar to GRAF, certain components of Rainbow are reusable, and the framework is able to perform model constraint evaluation. However, this framework mainly targets architecture-level adaptation, where adaptive behavior is achieved by composing components and connectors instead of interpreting a behavioral model. Similar to some of the presented work, our implementation of GRAF makes use of aspect-oriented programming for binding the adaptable software to the framework.
In contrast to the mentioned research projects, we focus on the interpretation of behavioral models for achieving adaptability at runtime, while also supporting parameter adaptation. The construction of an SASS from scratch is supported as well. Based on the view of micro and macro adaptation, we plan to experiment with different scenarios to learn more about sequences of alternation between these two phases.
We believe that integrating these two areas provides a promising way towards achieving longevity in software. Our thanks go to the anonymous reviewers for their thorough and inspiring feedback.

D. Garlan, S.-W. Cheng, A.-C. Huang, B. Schmerl, and P. Steenkiste. Rainbow: Architecture-based self-adaptation with reusable infrastructure. IEEE Computer, 37(10):46–54, 2004.
IBM. An Architectural Blueprint for Autonomic Computing. Autonomic Computing White Paper.
B. Morin, O. Barais, J.-M. Jezequel, F. Fleurey, and A. Solberg. Models@Run.time to support dynamic adaptation. IEEE Computer, 42(10):44–51, October 2009.
M. M. Gorlick, R. N. Taylor, D. S. Rosenblum, and A. L. Wolf. Evolving Adaptable Systems: Potential and Challenges.
T. Vogel and H. Giese. Adaptation and abstract runtime models. In Proc. SEAMS, 2010.

Development processes for embedded systems are pushed to their limits by time and cost pressure. The workshop participants from industry included representatives of various application domains, among them automation, automotive, and aviation. As their main areas of interest, the participants named requirements engineering, architecture design, model-based development and testing, as well as reengineering of embedded systems. Furthermore, an open discussion round took place in which the expectations regarding future embedded systems were discussed and questioned together with all workshop participants.

Further challenges mentioned were the distributed development across several company sites and the synchronization with other development models. To achieve this, extended system and safety models are used to evaluate failure modes and time constants. Current and future standards and maturity models impose high accuracy and quality requirements on the development process of such software-intensive, embedded systems. But nowadays, there are process and tooling gaps between the different modeling aspects for the system under development (SUD).
In this paper we present a seamless, model-based development process, which is intended for the automotive supplier domain and conforms to the process reference model of Automotive SPICE. The development process addresses the issues mentioned above by using systematic transitions between different modeling aspects and simulations in early development stages. Embedded systems can be found in different products ranging from home appliances to complex transport systems. The part of functionality realized by software steadily increases in these systems. Consequently, the number of safety-critical functions realized by software also grows, especially in the automotive sector.
These safety requirements can be derived from international standards like IEC 61508 or, for the automotive industry, the upcoming standard ISO 26262. For example, for a component with a high automotive safety integrity level (ASIL), the data exchange via its interfaces has to be secured using CRC checksums. Such safety requirements are, however, not restricted to the system under development (SUD) but also concern the development process. But since standards and process reference models have to be generally applicable to a broad range of organizations, they do not specify how the realization of the properties or artifacts can be achieved.
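Securing interface data with a CRC can be illustrated with a small sketch. The polynomial and frame layout below are arbitrary choices for illustration (CRC-8, polynomial 0x07, no reflection); a real ASIL-qualified design would use the checksum variant prescribed by the applicable standard or communication protocol.

```python
# Sketch: securing interface data with a CRC-8 checksum (polynomial 0x07,
# init 0x00, no reflection). The receiver recomputes the CRC over the payload
# plus the CRC byte; a residue of 0 means the frame arrived intact.

def crc8(data: bytes, poly: int = 0x07) -> int:
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

payload = bytes([0x12, 0x34, 0x56])        # e.g. a sensor value on the bus
frame = payload + bytes([crc8(payload)])   # sender appends the checksum

assert crc8(frame) == 0                    # intact frame: residue is zero

corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
assert crc8(corrupted) != 0                # a single-bit error is detected
```

Because this CRC variant uses init 0 and no output XOR, appending the checksum makes the residue of an intact frame exactly zero, which keeps the receiver-side check trivial.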
Models have found their way into the current development processes of embedded systems, including the automotive sector. This leads to several problems. First of all, there are process and tooling gaps between these models.
It is unclear how to move from, for example, textual requirements to model-based design [WW03] and from the system architecture to the software architecture [DN08]. Thus, there is a need for integrated model chains [Bro06]. Second, traceability between different models and between models and requirements is nowadays established by manually linking the individual model elements and requirements.
As the amount of requirements, and hence the size of the models, grows, the lack of automation for this task is increasingly a problem. Process reference models are too general to help with this problem, so a concrete development process is necessary. Since embedded systems consist of hardware and software parts, the interaction between these diverse elements is often not tested until the end of the development process, when all software and hardware components are available.
If there are errors in the SUD, their correction is very expensive. We will show how the system architecture can be extended to include all necessary information for simulations of the SUD already in early stages of our development process. As a running example for an embedded system, we use a comfort control unit.
This electronic control unit (ECU) is responsible for the control of the interior light, the central locking, and the adjustment of windows, mirrors, and seats. The comfort control unit is not a highly safety-critical system, but it is connected with other systems that have a high safety level. Thus, it also contains some functions to check whether persons are in the vehicle interior.
This information is necessary for the airbag system. In the next section, we present a systematic development process for the automotive supplier domain and show the different artifacts, extensions, techniques, and their usage in the development process. In Section 3, the related work is discussed. At the end of the paper, a conclusion and an outlook are presented. Figure 1 depicts the engineering processes organized in a V-Model. In the following three subsections, we present techniques supporting the engineering processes in the left part of the V-Model.
These perspectives cover different modeling aspects of the SUD and thus serve as means to achieve separation of concerns. Our approach starts with the manual creation of formalized, textual requirements, their validation, and the transition to the model-based system architectural design (cf. Subsection 2.1). Thus, this subsection covers the corresponding engineering processes. Requirements are initially captured in natural language. This is due to the fact that natural language is easy to use: it does not require training or dedicated tools [Poh10], so all stakeholders can understand requirements formulated in this way. However, informal natural language cannot be processed automatically. Thus, all further tasks like the validation of the requirements, the transition to the model-based design, and the maintenance of traceability have to be done manually, which is time-consuming, error-prone, and often repetitive.
Figure 2 sketches the requirements engineering within our development process. In the automotive domain, mostly informal customer requirements are the starting point. The customer requirements specify behavior that is observable by the end users (cf. Figure 1). In the process of system requirements analysis, the customer requirements are refined into system requirements. Thus, this process belongs to the functional perspective. The system requirements analysis is one of the most important engineering processes, since the system requirements serve as a basis for all further engineering processes [HMDZ08]. The system requirements are formulated in a controlled natural language (CNL). The CNL restricts the expressiveness of natural language and disambiguates it, enabling automatic processing of the requirements while keeping textual requirements understandable for all stakeholders at the same time.
We use a slightly extended CNL called requirement patterns that has already been successfully used in the automotive industry [KK06]. For example, the system requirements in Figure 2 specify that the Comfort Control Unit contains the subsystem Interior Light Control (among others), which reacts to the signal Doors Unlocked.
By this means it is checked whether all required information is delivered by the surrounding systems. Firstly, the possibility of automatically processing the system requirements formulated with the requirement patterns enables automated requirements validation. The validation detects requirements that violate rules for the function hierarchy (like unused signals, a not-well-formed function hierarchy, overlapping deactivation conditions for functions, or range violations) and proposes corrections [HMvD11].
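One of these validation rules, the detection of unused signals, can be sketched as a simple check over the function hierarchy extracted from the requirement patterns. The data layout below is an assumption made purely for illustration.

```python
# Sketch: validating a function hierarchy for unused signals. A signal that is
# produced but consumed by no function is reported (layout is illustrative).

def unused_signals(functions):
    """functions maps a function name to (produced, consumed) signal sets."""
    produced = set().union(*(p for p, _ in functions.values()))
    consumed = set().union(*(c for _, c in functions.values()))
    return sorted(produced - consumed)

hierarchy = {
    "Central Locking":        ({"Doors Unlocked", "Doors Locked"}, set()),
    "Interior Light Control": ({"Light On"}, {"Doors Unlocked"}),
}
print(unused_signals(hierarchy))   # → ['Doors Locked', 'Light On']
```

A validation tool built on a CNL can run such checks automatically after every edit of the requirements, which is exactly what informal natural language prevents.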
Secondly, by using parsing and model transformation techniques, we can transfer the information already contained in the system requirements to an initial system analysis model (cf. Req2Sys in Figure 2). The model transformations are realized in a research prototype and are based on bidirectional Triple Graph Grammars (TGGs) [Sch95], which, besides the transformation to SysML, allow synchronization of system requirements and system analysis model, as well as a translation back to text when combined with code synthesis techniques.
Additionally, since the transformation relates the requirements with the model elements, we automatically gain traceability. Further details are given in [Hol10]. This corresponds to the engineering process of system architectural design. We use the Systems Modeling Language (SysML) to specify the hardware and software subsystems including their interrelations.
Additionally, in this engineering step the decision has to be made which functions will be implemented in hardware and which will be implemented in software [HMDZ08]. Figure 3 depicts our view on the system architecture design as part of the architecture model.
For embedded systems, the behavior is often characterized by timing requirements. The behavior of the interior light control, for example, can be described by a state machine: it contains two states representing Light on and Light off. The architectural model already contains most of the information required for such a simulation. In the automotive domain, the operating systems (OS) used have no dynamic parts; for example, all tasks are configured statically. This enables the modeling of OS properties in the architecture model. Each time it is activated, the task may run for at most 10 ms.
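The OS properties captured in the architecture model (task activation rates and execution budgets) already permit a first plausibility check before any detailed simulation. The sketch below checks a simple utilization bound; the task set is invented for illustration and far weaker than what a dedicated real-time simulation analyzes.

```python
# Sketch: first-cut timing check from modeled OS properties. Each task has a
# worst-case budget and a period (both in ms); total utilization must stay
# below 100% for the task set to be feasible at all.

def utilization(tasks):
    return sum(budget / period for budget, period in tasks.values())

tasks = {
    "interior_light":  (10, 100),   # may run at most 10 ms every 100 ms
    "central_locking": (5, 50),
    "seat_adjustment": (20, 200),
}
u = utilization(tasks)
print(f"CPU utilization: {u:.0%}")
assert u <= 1.0, "task set overloads the processor"
```

Such a check catches gross overload early; precise response times under preemption and blocking still require a scheduling analysis or simulation.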
For the simulation, we use the real-time simulation tool chronSIM. The tool needs a dedicated simulation model, which contains information about the hardware architecture, the software components, the deployment of software to hardware, and the OS settings; that is, the information we specify in the extended architecture model. A detailed description of the transformation and of what the simulation output looks like can be found in [NMK10]. The system architecture design covers different aspects of information and changes during the development process.
In early and in later development stages, the architecture model thus contains different levels of detail. These requirements are used as an input for the software design, for which we use the layered AUTOSAR architecture; its first layer contains the application software components. The second layer is the so-called Runtime Environment (RTE), which serves as a middleware for the communication of software components potentially located on different ECUs. The RTE is automatically created by code generators. The third layer is the so-called basic software, which contains partially hardware-dependent software providing basic services to the application software.
Examples for the basic software are the OS or the communication stack, which is responsible for the communication with other ECUs. In order to establish a seamless development from the requirements via the architectural model to the software design, a further model transformation from the architectural model to AUTOSAR is necessary. Our transformation consists of two parts. Its bidirectional nature enables synchronization of both models and round-trip capabilities. This is one of the major advantages over approaches using SysML and AUTOSAR in a complementary but manual way: if changes in one model occur, they can be transferred to the other model automatically in order to reestablish consistency.
Those runnables are in turn used for the RTE generation. This can also be done for further elements of the RTE like exclusive areas, inter-runnable variables, and so on. The transformation closes the gap to the software design. For the transition to software construction, the gap between the two modeling languages is not closed by means of automatic model transformations, but by manually establishing traceability.
Therefore, the developer has to model both architectures manually and afterwards create traceability links. A further aim related to our work is a simulation of generated code including the OS properties. In contrast to our approach, EDONA focuses only on techniques and tools but does not consider any process reference model. Furthermore, the transition from textual requirements to the model-based design and the traceability between them is not addressed. We explained how system requirements written in natural language can be formalized. This way, we enable automated processing of the requirements, that is, a validation and the extraction of information into an initial system analysis model, while all stakeholders are still able to understand them.
Furthermore, we extended the modeling languages with notations to specify timed behavior and OS properties. As explained in the introduction, Automotive SPICE does not provide any guidelines on which modeling language shall be used in an engineering process for the creation of an artifact. Thus, our development process can be seen as a possible instance of the Automotive SPICE PRM, which additionally provides guidance for the developers on moving from the system requirements via the system architecture design and the software design to the implementation.
Thus, since errors are detected earlier, this leads to less expensive corrections. One of our further research goals is a tighter connection between the requirement patterns and the system real-time simulation, so that the requirements can be used as direct input for the simulation. Secondly, the requirement patterns describe a function hierarchy, but currently we do not specify how a function shall work. If we extended the CNL in such a way, it would become possible to check whether there is an already implemented component that realizes a function, and hence boost the reuse of components.
As we mentioned the upcoming standard ISO 26262, another goal is to enhance our development process to easily integrate techniques for ensuring functional safety. Finally, we want to extend our current demonstrator models for a more extensive evaluation of the overall process.

M. Broy. Challenges in Automotive Software Engineering. In Proc. ICSE, 2006.
Eine erweiterte Systemmodellierung zur Entwicklung von softwareintensiven Anwendungen in der Automobilindustrie.
Mit Satzmustern von textuellen Anforderungen zu Modellen. Automobil Elektronik.
Wie hoch ist die Performance? Automobil-Elektronik.
K. Pohl. Requirements Engineering: Fundamentals, Principles, and Techniques. Springer, 2010.

Often-used fault tolerance mechanisms have complex failure behavior and produce overhead compared to systems without such mechanisms. The question arises whether the overhead for fault tolerance is acceptable for the increased safety of a system. In this paper, an approach is presented that uses safety analysis models of fault tolerance mechanisms and execution times of their subcomponents to generate failure-dependent execution times.
They are becoming more and more complex due to increasing functionality and automation in industry. The corresponding safety analysis models also grow in complexity and level of detail. To increase the safety of such systems, redundancies are often used within a certain mechanism to tolerate faults in the redundant units.
These so-called fault tolerance mechanisms are widely used concepts and have a known behavior. They produce an overhead, e.g., in execution time. The problem arises whether the overhead produced by a fault tolerance mechanism is acceptable for the increased safety of a system. Since safety requirements often contain a deadline and an upper bound for the failure probability, the execution time of a mechanism has to be considered per mode of operation. The mode itself depends on failure modes given by safety analysis models. The combination of the time consumed by a mode and the failure modes provides a detailed prospect of the timing behavior of a fault tolerance mechanism.
Thereby, a trade-off analysis in terms of execution time is supported. In Section 2, related approaches are described. In Section 3, an example system is described that is used to introduce the safety analysis model of Component Fault Trees, which provides failure modes as an input for the approach presented in this paper. Section 4 is the central section of this paper. The example system is picked up to describe the problem of modeling failure-dependent overhead in time manually. The central approach of generating execution times for fault tolerance mechanisms according to failure modes of the safety analysis model is formalized and applied to the example system.
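The core idea, combining per-mode execution times with failure-mode probabilities, can be sketched numerically. The mechanism and all numbers below are invented for illustration: a primary/backup pattern in which a detected failure of the primary adds the backup's execution time.

```python
# Sketch: failure-dependent execution times for a simple fault tolerance
# mechanism (primary with backup). Each failure mode of the safety analysis
# model yields one execution mode with its own time and probability.

modes = [
    # (failure mode,             probability, execution time in ms)
    ("no failure",               0.99,        4.0),   # primary only
    ("primary fails, detected",  0.009,       9.0),   # primary + check + backup
    ("detection fails",          0.001,       4.5),   # erroneous result delivered
]

worst_case = max(t for _, _, t in modes)
expected = sum(p * t for _, p, t in modes)
print(f"worst case: {worst_case} ms, expected: {expected:.4f} ms")
```

Reading the result per mode, rather than as a single worst case, is what enables the trade-off analysis: the 9 ms overhead occurs only in the rare detected-failure mode, while the expected time stays close to the failure-free 4 ms.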
The generated execution times allow a sophisticated view of the overhead in time for such mechanisms. Section 5 concludes this paper and provides a perspective for future work. In static timing analysis, the execution times of individual static blocks are computed for a given program or a part of it. In general, these approaches provide safe upper bounds for the worst-case execution time (WCET) by making pessimistic assumptions, at the expense of overestimating the WCET, in order to guarantee deadlines for the analyzed program. Advanced approaches refine this analysis further. On the other hand, measurement-based approaches do not need to perform any complex analysis.
They measure the execution time of a program on real hardware or processor simulators. These approaches can, in general, not provide a safe upper bound for the WCET, since neither an initial state nor a given input sequence can be proven to be the one that produces the WCET. Neither static timing analysis nor measurement-based approaches encompass additional timing failure modes or failure probabilities for calculating probabilistic WCETs for several modes of an analyzed system.
However, approaches can be found that split the WCET in a probabilistic way. Here, the authors concentrate on how the nodes of a syntax tree have to be calculated if the WCETs for its leaves are given as probabilistic distributions. In a later work, the approach is extended to also encompass dependence structures. The authors solved the arising problem of multivariate distributions by using a stochastic tool for calculating a probabilistically distributed WCET.
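For the special case of statistically independent leaves, the calculation at a sequence node amounts to a convolution of the children's distributions; a minimal sketch (the block names, times, and probabilities are invented for illustration):

```python
from collections import defaultdict

def convolve(dist_a, dist_b):
    """Combine two independent discrete execution-time distributions.

    Each distribution maps an execution time to its probability. The
    sequential composition of two program parts (a sequence node in the
    syntax tree) is distributed as the convolution of the two inputs.
    """
    result = defaultdict(float)
    for t_a, p_a in dist_a.items():
        for t_b, p_b in dist_b.items():
            result[t_a + t_b] += p_a * p_b
    return dict(result)

# Hypothetical leaf distributions (times in arbitrary units).
block_1 = {10: 0.9, 15: 0.1}
block_2 = {20: 0.8, 30: 0.2}

sequence = convolve(block_1, block_2)
# e.g. the worst case 45 = 15 + 30 has probability 0.1 * 0.2 = 0.02
```

The dependence structures mentioned above are exactly what breaks this simple product of probabilities, which is why the cited work resorts to a stochastic tool for the multivariate case.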
To determine the probability distribution, there have been some initial approaches in probabilistic WCET analysis. In [BE00], the authors use a measurement-based approach. The central idea is to measure the timings of a task and to stochastically derive the statement that a WCET will provide an upper bound for a certain subset of the input space. This approach is extended for scheduling in [BE01]. In contrast, the approach presented in this paper generates timing failure modes for fault tolerance mechanisms.
The generated failure modes can therefore be taken into account as input for previously described approaches such as [BCP03]. Current approaches that automatically deduce safety analysis models from an annotated system development model mainly shift the complexity from the safety analysis model to the system development model.
These approaches solve the problem of inconsistencies between the two models by attaching the failure logic to the system development model entities using annotations [PM01, Gru06, Rug05]. Only a few approaches deduce failure behavior by semantically enriching the system development model. In [GD02], an approach is presented that supports the analysis at a high level by providing recurring safety analysis model constructs, mainly for redundancy mechanisms.
These constructs decrease the complexity of a manual safety analysis, but do not provide a solution for generating timing failure modes. In [dMBSA08], a larger design space is covered, but the high degree of detail in the safety analysis model is achieved at the expense of a large number of annotations. The approach presented in this paper belongs to the group of semantic enrichment, since parts of safety analysis models are used to deduce timing failures.
In the next section, an example system is introduced along with its safety analysis model as the running example of this paper. This section is used to introduce the methodology of Component Fault Trees (CFTs). This safety analysis model relates parts of a fault tree and failure modes to components and ports (see [KLM03]), which makes it an interesting model for combining execution times of components with their failure modes.
The approach presented in this paper uses CFTs, but different safety analysis models, such as Generalized Stochastic Petri Nets or Markov Chains, may provide input for it as well. Every CFT has input and output failure modes (depicted as triangles). Within a CFT, conventional fault tree gates can be used to model the failure behavior of a component. The safety analysis model indeed provides failure modes that involve an overhead in time, but generating absolute values manually can become an error-prone and time-consuming task.
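How failure probabilities propagate through such gates can be sketched as follows (a hedged illustration only, assuming statistically independent inputs; the component structure and the probability values are hypothetical):

```python
def or_gate(*probs):
    """OR gate: the output failure occurs if any input failure occurs.
    Assumes statistically independent inputs."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

def and_gate(*probs):
    """AND gate: the output failure occurs only if all input failures occur."""
    p_all = 1.0
    for q in probs:
        p_all *= q
    return p_all

# Hypothetical component: an output failure mode fed by two basic events
# combined by an AND gate, OR-ed with one input failure mode of the component.
p_basic_1, p_basic_2, p_input = 1e-4, 2e-4, 1e-5
p_output = or_gate(and_gate(p_basic_1, p_basic_2), p_input)
```

In a CFT the output failure mode of one component can feed the input failure mode of the next, so such evaluations compose along the component structure.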
Execution times have to be included in this calculation, and additional timing failures complicate the process further. This is demonstrated in the next section, where the central approach of this paper that automates this process is presented. As stated in the introduction, there are failure modes in safety analysis models that correspond to a certain mode of a fault tolerance mechanism. If such a failure mode is not active, the fault tolerance mechanism is in a different mode where only the PRIMARY redundant unit is executed. The set of elements executed in a mode is here called a run.
Since the approach presented here aims at execution times, the sets of executed elements for those two modes are needed as an input. Those are depicted in table 2. This combination of additional timing failure modes and execution times provides a sophisticated view on the overhead in time produced by the fault tolerance mechanism. Calculating a larger number of such combinations manually is error-prone and time-consuming.
Each component has one associated execution time. Additionally, the failure mode of the fault-free execution is set to true to ease the later construction. These sets of execution times and their corresponding failure modes are used in the following to generate the different execution times of a run. Using these sets, all possible combinations of executions can be deduced.
To generate tuples of execution times and corresponding failure modes for a run, the execution times are summed up and the failure modes are combined. In the following, execution times are combined by simply summing them up; this holds for many approaches that encompass processor states and stack values for calculating WCETs. Using this construction, the different execution times are combined with failure modes in the fashion described for the example system at the beginning of this section. The construction of the different alternate executions provides a sophisticated view on the overhead in time of fault tolerance mechanisms.
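Under this summation assumption, the construction can be sketched as follows (the component names, execution times, and failure-mode labels are invented for illustration and do not reproduce table 2):

```python
# Execution times per component (illustrative values, arbitrary time units).
exec_time = {"PRIMARY": 40, "CHECK": 5, "BACKUP": 60}

# Runs of the fault tolerance mechanism: each run is the set of executed
# components together with the failure-mode condition under which it occurs.
# "true" marks the fault-free run, as in the construction described above.
runs = [
    (["PRIMARY", "CHECK"], "true"),                       # fault-free mode
    (["PRIMARY", "CHECK", "BACKUP"], "primary_failed"),   # degraded mode
]

def run_times(runs, exec_time):
    """Generate tuples (total execution time, failure mode) for each run
    by summing the execution times of its components."""
    return [(sum(exec_time[c] for c in comps), fm) for comps, fm in runs]

for total, failure_mode in run_times(runs, exec_time):
    print(f"{total} time units if {failure_mode}")
```

Enumerating all runs in this way is exactly the step that becomes error-prone when carried out by hand for larger sets of failure modes.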
The automated construction of possible alternates is less error-prone than a manual approach and is capable of handling a larger number of combinations and failure modes. In the next section, we conclude this paper and provide an outlook on future work. This sophisticated view on execution times supports a trade-off analysis in terms of safety and overhead in time for fault tolerance mechanisms. Since the number of possible combinations may be problematic for a manual approach, we presented in section 4 an approach that automatically deduces execution times and corresponding failure modes.
In our future work, we will concentrate on extending the approach to generic mechanisms as well as to parallel redundancy. Furthermore, we want to be able to automatically deduce the execution behavior from a system model. This reduces the effort for applying the methodology presented here and also allows the use of WCET tools that are able to calculate tight upper bounds.
References

Broster, A. Burns, and G. Probabilistic analysis of CAN with faults. Real-Time Systems Symposium (RTSS).
Burns and S. Predicting computation time for advanced processor architectures. Euromicro Real-Time Systems (RTS).
Statistical analysis of WCET for scheduling. Real-Time Systems Symposium.
Briones, J. Silva, and A. Integration of safety analysis in model-driven software development. Software, IET, 2(3), June.
Ferdinand and R. In Building the Information Society.
Ganesh and J.
International Standard IEC.
Joshi, M. Heimdahl, M. Steven, and M. Model-Based Safety Analysis.
Mani Krishna. Fault-Tolerant Systems. Morgan Kaufmann, San Francisco.
A new component concept for fault trees. Australian Computer Society, Inc.
In Charlotte Seidner, editor.
Nolte, H. Hansson, and C. Probabilistic worst-case response-time analysis for the controller area network.
Papadopoulos and M. Simulink Models. International Conference on Dependable Systems and Networks.
The worst-case execution-time problem: overview of methods and survey of tools. ACM Trans.

Natural language, however, is ambiguous. RSL is able to express requirements from multiple aspects, e.g. functional, safety, and timing requirements.
TADL is the upcoming standard for handling timing information in the automotive domain. Requirements form an interface between different groups of people. Typically, requirements are stated as natural language text. The disadvantage is that ambiguities can cause a huge number of requirement changes in all development phases. This leads to higher development costs and may also be unacceptable if the product is deployed in safety-critical environments. Writing requirements in formal languages, however, does not only require some training; such requirements may also be hard to read. RSL patterns consist of static text elements and attributes that are instantiated by the requirements engineer.
RSL allows writing natural-sounding requirements while being expressive enough to formalize complex ones. RSL aims at covering a large range of requirement domains. This includes recurring activations, jitter, and delay. Though not illustrated in this paper, there already exists a compiler for translating instances of the patterns into observer automata. That cannot be done with approaches that do not provide a typed system. RSL has been extended and evolved in that it is now more consistent and modular.
Section 4 concludes the paper. A functional pattern, for example describing causality between two events, looks like this:

whenever event1 occurs event2 occurs [during interval]

Phrases in square brackets are optional, bold elements are static keywords of the pattern, and elements printed in italics represent attributes that have to be instantiated by the requirements engineer. The names of the attributes can be used later to be mapped to a given architecture, or they can be used to generate the architecture.
When specifying requirements there is no strict typing like int or real, but there are categories that specify what is described by the different attributes. To keep the language simple while still being expressive enough, the set of supported attributes has to be carefully chosen: Events represent activities in the system.
An event can be, for example, a signal, a user input by a pressed button, or a computational result. Events occur at distinct time instants and have no duration. Conditions can be used in two different ways. Firstly, they can be used as pseudo events to trigger an action if the condition becomes true. Secondly, they can restrict the scope of a pattern element (as in event under condition). Intervals describe a continuous fragment of time whose boundaries are relative time measures or events. Components refer to entities that are able to handle events or condition variables. Typically, components refer to an architecture. RSL supports various combinations of elements.
That is, wherever an event occurs in a pattern, one may also use event during [interval], or event under condition. Filtering of events can be applied recursively. Projection is naturally extended to languages over timed traces. Pattern F1: whenever event1 occurs event2 [does not] occur[s] [during interval]. For mapping TADL constraints, only boundaries in terms of timed values are needed. Throughout this paper we make use of the following notations.
For example, consider the F1 pattern instance: whenever s occurs r occurs during [l,u]. Multiple activations of the pattern may run in parallel. Note that the attribute once can only be used in combination with an interval, as otherwise the pattern would not terminate. Pattern R1: event occurs sporadic with minperiod period1 [and maxperiod period2] [and jitter jitter].
The optional period2 bounds the maximum inter-arrival time for subsequent occurrences of the reference event. Consider the R1 pattern instance: e occurs sporadic with minperiod T1 and maxperiod T2 and jitter J. In the following, we restrict ourselves to the case where only the last event may be bounded by an additional interval. For example, consider the semantics of the F1 pattern instantiation: whenever s occurs e and then f and then r during [l', u'] occur during [l,u]. The syntax is: N times event. This is equivalent to constructing the parallel composition of the constraints p_i and then checking the system against it.
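The timed-language semantics of such an F1 instance can be illustrated by a small checker over finite timed traces (a sketch under simplifying assumptions: a trace is a finite list of (time, event) pairs, the negated and once variants are not modeled, and all names are illustrative):

```python
def satisfies_f1(trace, s, r, l, u):
    """Check the F1 pattern instance
        'whenever s occurs r occurs during [l,u]'
    on a finite timed trace given as a list of (time, event) pairs.

    Every occurrence of s opens its own activation; activations run in
    parallel, and each must be answered by some occurrence of r within
    the window [t_s + l, t_s + u].
    """
    for t_s, ev in trace:
        if ev != s:
            continue
        if not any(e == r and t_s + l <= t <= t_s + u for t, e in trace):
            return False
    return True

trace = [(0, "s"), (3, "r"), (10, "s"), (12, "r")]
satisfies_f1(trace, "s", "r", 1, 4)  # r at 3 and 12 answer s at 0 and 10 -> True
satisfies_f1(trace, "s", "r", 0, 1)  # s at 0 gets no r within [0, 1] -> False
```

An observer automaton generated from the pattern would monitor the same windows online instead of inspecting a complete trace.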
That is, the semantics of a constraint is given by its satisfaction relation. The semantics of an event is the same as in RSL. An event chain relates two sets of events, stimulus and response. There is an intuition behind an event chain. The parameters of each kind of delay constraint are given by a subset of the following elements: a set S of events acting as stimuli, a set R of events acting as responses, a time offset l indicating the near edge of a time window, a time offset u indicating the far edge, and the size w of a sliding window.
For the remaining constraints, we will thus not cite the TADL semantics and directly give the timed language. The lower and upper bounds l and u, respectively, are the same for all these F1 patterns.
TADL includes four kinds of repetition rate constraints, namely periodic, sporadic, pattern, and arbitrary. However, the generic form cannot be instantiated directly. Generic repetition rate constraint: a generic repetition rate constraint is parametrized by the following elements: the event e whose occurrences are constrained; a lower bound l on the time-distance between occurrences of e; an upper bound u on the time-distance between occurrences of e; the deviation J between an ideal point in time at which the event e is expected to occur and the actual time it occurs; and a count SP indicating whether it is subsequent occurrences or occurrences farther apart that are constrained.
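For the case SP = 1 (subsequent occurrences) with jitter ignored, the constraint reduces to a simple check over the occurrence times of e; a simplified sketch, not the TADL semantics verbatim:

```python
def satisfies_rate(occurrences, l, u):
    """Check a repetition rate constraint with SP = 1: each pair of
    consecutive occurrence times of the event e must be between l and u
    time units apart. The jitter parameter J is ignored in this sketch.

    occurrences: sorted list of occurrence times of e.
    """
    return all(l <= b - a <= u for a, b in zip(occurrences, occurrences[1:]))

satisfies_rate([0, 10, 19, 30], 8, 12)  # gaps 10, 9, 11 all in [8, 12] -> True
satisfies_rate([0, 10, 25], 8, 12)      # gap 15 exceeds the bound u -> False
```

For SP > 1, the same comparison would be applied to occurrences that are SP steps apart rather than to immediate neighbours.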
This semantics can be expressed in RSL by instantiating the following R1 pattern: e occurs sporadic with minperiod l and maxperiod u and jitter J. It can also be expressed by the following F1 pattern: whenever e occurs SP times e occur during [0,u], once. The language provides patterns for various aspects like safety, functional, and timing requirements.