DCMODELSWARD 2014 Abstracts

Short Papers
Paper Nr: 1

Secure Web Engineering Supported by an Evaluation Framework - Preliminary Report on a Web Engineering Approach for Secure Applications Supported by a Conceptual Evaluation Framework for Secure Systems Engineering


Marianne Busch

Abstract: Daily news reports show that current web applications are often not secure enough, which threatens both users' privacy and the reputation of companies. Our first aim is to make web applications more secure by taking security features into account from the very beginning of the software development life cycle (SDLC). During the requirements and design phases, graphical or textual models help to provide an overview of a web application and its security features. In addition, models can serve documentation purposes, and security-related properties expressed in models can be validated and transformed into artifacts for the implementation phase. We extend an existing modeling approach so that general security features, such as secure connections, authentication and access control on data structures, can be represented. Additionally, we focus on specific security features, e.g., access control on the navigation structure of a web application or automated reactions to denial-of-service attacks.

Collecting and analyzing data about existing security engineering methods, notations and tools (called mechanisms) is of major importance for security and software engineers, as it helps them make decisions about solutions for upcoming tasks. These tasks can be related to the design of web applications, but may also go beyond web engineering. To ease the tasks of recording one's own results and of getting an overview of existing methods, notations and tools, the Common Body of Knowledge (CBK) was implemented as a semantic wiki within the scope of the EU project NESSoS. As members of this project, we gained experience working with the CBK and its underlying ontology and reflected on requirements for a conceptual evaluation framework. Our second aim is therefore to provide an approach for evaluating methods, notations and tools for the engineering of secure software systems. Evaluation should also be possible for vulnerabilities, threats and security properties (e.g., integrity). Here, the term "evaluation" covers the collection, analysis and finer-grained representation of (security-specific) knowledge. A further requirement is that the core framework is not overly detailed, but easy to extend.
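One of the specific security features mentioned above, access control on the navigation structure, might be sketched as follows. This is purely an illustrative assumption — the abstract does not specify the modeling approach at this level, and all names (NAVIGATION, ROLE_HIERARCHY, may_navigate) are hypothetical:

```python
from typing import Optional

# Hypothetical navigation structure of a web application, mapping each
# navigation node to the role it requires (None = publicly reachable).
NAVIGATION = {
    "home": None,
    "profile": "user",
    "admin_panel": "admin",
}

# A role grants itself plus any roles it subsumes.
ROLE_HIERARCHY = {"admin": {"admin", "user"}, "user": {"user"}}

def may_navigate(target: str, role: Optional[str]) -> bool:
    """Access control on the navigation structure: a node is reachable
    only if the visitor's role grants the node's required role."""
    if target not in NAVIGATION:
        return False                      # node does not exist
    required = NAVIGATION[target]
    if required is None:
        return True                       # public page
    if role is None:
        return False                      # anonymous visitor, protected page
    return required in ROLE_HIERARCHY.get(role, set())
```

In a model-driven setting such a check would not be hand-written per page; the point of the approach described above is that it could be generated from the validated navigation model.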

Paper Nr: 2

A Collaborative Platform for Software Evolution Visualization - Leveraging Meta-model Driven Measurements with Big Data Strengths


João Carlos Caldeira

Abstract: This document describes a preliminary PhD thesis proposal that will hopefully lead to a collaborative framework and platform for Software Evolution Visualization (SEV). We support our decision to follow this research area by evaluating recent works, which have shown that there is a need for multi-metrics, multi-perspective and multi-strategy approaches to SEV, as summarized by (Novais, et al., 2013). The authors identify research niches such as missing case studies, tool comparisons and experiments aimed at predicting defects and improving software quality and development processes. Another missing aspect relates to the presentation of truly scalable visualizations and of dependency impact among projects. It is also recognized that there is little formal validation and collaboration in this area, most likely because the data is scarce, dispersed and not widely shared among individual researchers. The lack of empirical studies is a real constraint that prevents the community from performing benchmarking and comparing methodologies and results. In other words, the SEV community has failed to provide sound evidence, through empirical validation studies, of the impact of using the technology it created. In fact, SEV research deliverables provide visual insights that are expected to help in understanding complex software artefacts and, ultimately, to contribute to improving their quality and the maintenance process itself.

The goals of our research consist of proposing a structured approach to (i) collect data from public domain software repositories, (ii) extract complexity and quality metrics using a meta-model driven measurement approach (M2DM), (iii) store and eventually transform those metrics by adopting big data technologies for scalability's sake, and (iv) visualize software evolution, along with the corresponding metrics, in a collaborative fashion, making it possible to identify patterns and trends. The aforementioned approach is expected to scaffold exploratory activities on top of the collected data, allowing the community to perform benchmarking, evaluate software engineering best practices and assess software engineering research questions by means of empirical studies (Goulão, et al., 2012).
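The four-step approach (i)–(iv) could be sketched as a minimal pipeline. This is an assumption-laden illustration, not the proposal's implementation: step (i) is stubbed with hand-written revision data standing in for a repository miner, the metrics in step (ii) are deliberately simple stand-ins for M2DM-derived measures, and steps (iii)–(iv) are reduced to an in-memory store producing per-project evolution series ready for plotting:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Revision:
    """One mined snapshot of a project (step (i), stubbed here)."""
    project: str
    version: int
    loc: int        # lines of code
    n_methods: int

def extract_metrics(rev: Revision) -> dict:
    """Step (ii): derive simple size/complexity metrics from one revision."""
    return {
        "project": rev.project,
        "version": rev.version,
        "loc": rev.loc,
        "avg_method_size": rev.loc / rev.n_methods if rev.n_methods else 0.0,
    }

def evolution_series(revisions):
    """Steps (iii)-(iv): group metrics per project, ordered by version,
    yielding series that a visualization layer could plot as trends."""
    series = defaultdict(list)
    for rev in sorted(revisions, key=lambda r: (r.project, r.version)):
        m = extract_metrics(rev)
        series[m["project"]].append((m["version"], m["loc"], m["avg_method_size"]))
    return dict(series)

# Hand-written data in place of a real repository miner.
revs = [
    Revision("demo", 2, 1400, 60),
    Revision("demo", 1, 1000, 50),
]
print(evolution_series(revs))
```

At the scale the proposal targets, the in-memory dictionary would of course be replaced by the big data storage layer of step (iii); the sketch only fixes the shape of the data flowing between the steps.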