Accepted Tutorials
T01: Automated Testing of GUI Applications: Models, Tools, and Controlling Flakiness
T02: Build Your Own Model Checker in One Month
T03: Data Science for Software Engineering
T04: Software Analytics: Achievements and Challenges
T05: Developing Verified Programs with Dafny
T06: Software Engineering in the Age of Data Privacy: What and How the Global IT Community Can Share and Learn (Cancelled)
T07: Software Metrics: Pitfalls and Best Practices
T08: A Hands-On Java PathFinder Tutorial
T09: Specifying Effective Non-functional Requirements
T10: Pragmatic Pricing and Collaboration in Agile Software Development Projects (Cancelled)
T11: Efficient Quality Assurance of Variability-Intensive Systems
T12: The Company Approach to Teaching Software Engineering Project Courses (Cancelled)
T13: Software Requirement Patterns
T14: Essence: Kernel and Language for Software Engineering Practices (Cancelled)
T01: Automated Testing of GUI Applications: Models, Tools, and Controlling Flakiness
MON, May 20, 8:30 AM – 12:30 PM
Pacific Concourse H
Atif Memon and Myra B. Cohen
(University of Maryland, USA; University of Nebraska-Lincoln, USA)
System testing of applications with graphical user interfaces, such as web browsers and desktop or mobile apps, is more complex than testing from the command line. Specialized tools are needed to generate and run test cases, models are needed to quantify behavioral coverage, and changes in the environment (such as OS, VM, or system load) as well as the starting states of executions impact the repeatability of test outcomes, making tests appear flaky. In this tutorial, we present an overview of the state of the art in GUI testing, followed by a demonstration on various platforms (desktop, web, and mobile apps) using an open source testing tool, GUITAR. We show how to set up a system under test, how to extract models without source code, and how to then use those models to generate and replay test cases. We will then present a lecture on the various factors that may cause flakiness in the execution of GUI-based software, and hence in the results of analyses and experiments based on such software.
Over the last 10 years, we have been developing techniques and tools for automated GUI testing, as well as benchmarks and artifacts for experimentation. We will use these tools and the COMET benchmarking website (http://comet.unl.edu) to demonstrate the ideas in this tutorial. We expect that this tutorial will be beneficial both to researchers and students who want to develop techniques for testing GUI- and web-based software, and to practitioners from industry who run and rerun GUI software and often find their runs are flaky.
Biography: Atif Memon is an Associate Professor in the Department of Computer Science, University of Maryland, where he founded and heads the Event Driven Software Lab (EDSL). He designed and developed the model-based GUI testing software GUITAR. He has published over 100 research articles on the topic of event driven systems, software testing, and software engineering. He is the founder of TESTBEDS. He also helped develop the workshop on Experimental Evaluation of Software and Systems in Computer Science (EVALUATE).
Biography: Myra Cohen is an Associate Professor in the Department of Computer Science and Engineering at the University of Nebraska-Lincoln where she is a member of the Laboratory for Empirically based Software Quality Research and Development (ESQuaReD). She is a recipient of a National Science Foundation early CAREER development award and an Air Force Office of Scientific Research young investigator program award. Her research expertise is in testing highly configurable software, GUI testing, applications of combinatorial designs, and search based software engineering.
T02: Build Your Own Model Checker in One Month
MON, May 20, 8:30 AM – 12:30 PM
Pacific Concourse I
Jin Song Dong, Jun Sun, and Yang Liu
(National University of Singapore, Singapore; Singapore University of Technology and Design, Singapore; Nanyang Technological University, Singapore)
Model checking has become established as an effective method for automatic system analysis and verification, and it is making its way into many domains and methodologies. Applying model checking techniques to a new domain (which probably has its own dedicated modeling language) is, however, far from trivial. A translation-based approach works by translating domain-specific languages into the input language of a model checker. Because the model checker is not designed for the domain (or, equivalently, the language), such translations are often ad hoc. Ideally, it is desirable to have an optimized model checker for each application domain. Implementing one with reasonable efficiency, however, requires years of dedicated effort.
In this tutorial, we will briefly survey a variety of model checking techniques. Then we will show, step by step, how to develop a model checker for a language combining real-time and probabilistic features using PAT (Process Analysis Toolkit), and show that it can take as little as a few weeks to develop your own model checker with reasonable efficiency. The PAT system is designed to facilitate the development of customized model checkers. It has an extensible and modularized architecture to support new languages (and their operational semantics), new state reduction or abstraction techniques, new model checking algorithms, etc. Since its introduction 5 years ago, PAT has attracted more than 2000 registered users (from 500+ organisations in 55 countries) and has been applied to develop model checkers for 20 different languages.
Biography: Dr. Jin Song DONG received Bachelor and PhD degrees in computing from the University of Queensland in 1992 and 1996. From 1995 to 1998, he was a Research Scientist at the Commonwealth Scientific and Industrial Research Organization (CSIRO) in Australia. Since 1998 he has been in the Computer Science Department at the National University of Singapore (NUS), where he is currently an Associate Professor. Jin Song’s research is in the areas of software engineering and formal methods. He is on the editorial boards of Formal Aspects of Computing and Innovations in Systems and Software Engineering. Jin Song is a steering committee member of the International Conference on Formal Engineering Methods (ICFEM) and the Asia Pacific Software Engineering Conference (APSEC). He will be the General Chair of the 19th International Symposium on Formal Methods (FM 2014). Some tutorials conducted by Jin Song and his colleagues are listed below:
- Half-day tutorial at Formal Methods 2011 (FM’11): “Build Your Own Model Checker in One Month” (co-presenters: Jun Sun and Yang Liu)
- Half-day tutorial at Formal Methods 2005 (FM’05): “Modeling Languages Spectrum: From Web Ontology to Behavior Specifications” (co-presenter: Dr. Roger Duke)
- Full-day tutorial at 26th International Conference on Software Engineering (ICSE’04): “Software Modeling Techniques and the Semantic Web”
- Half-day tutorial at 12th International Symposium on Formal Methods Europe (FM’03): “Semantic Web and Formal Methods”
- Full-day tutorial at 23rd International Conference on Software Engineering (ICSE’01): “Integrated Formal Modeling Techniques and UML”
Biography: Dr. Jun SUN received Bachelor and PhD degrees in computing science from the National University of Singapore (NUS) in 2002 and 2006. In 2007, he received the LEE KUAN YEW postdoctoral fellowship at NUS. In 2010, he joined the Singapore University of Technology and Design as an Assistant Professor, and he spent one year as a Visiting Scholar at MIT in 2011. Jun’s research is in the areas of software engineering and formal methods, in particular formal specification, formal verification, and formal synthesis. He has more than 70 publications, including articles in IEEE Trans. on SE and ACM Trans. on SE, as well as papers at CAV, ICSE, and FM. Jun will be a Program Co-Chair of the 19th International Symposium on Formal Methods (FM 2014).
Biography: Dr. Liu Yang graduated in 2005 with a Bachelor of Computing (Honours) from the National University of Singapore (NUS). In 2010, he obtained his PhD and began postdoctoral work at MIT and SUTD. In 2011, Dr. Liu was awarded the Temasek Research Fellowship at NUS to be the Principal Investigator of a project of over 1.1 million SGD in the area of cyber security. In fall 2012, he joined the School of Computer Engineering, Nanyang Technological University as an Assistant Professor. Dr. Liu specializes in software verification using model checking techniques. His research has bridged the gap between the theory and the practical use of formal methods to evaluate the design and implementation of software for high assurance. To date, he has more than 60 publications, including articles in TOSEM, TSE, ICSE, FSE, CAV, and FM. He has been a Program Co-Chair for a number of conferences and workshops, including ICECCS 2013 and PRDC 2014.
T03: Data Science for Software Engineering
MON, May 20, 8:30 AM – 12:30 PM
Pacific Concourse J
Tim Menzies, Ekrem Kocaguneli, Burak Turhan, Leandro L. Minku, and Fayola Peters
(West Virginia University, USA; University of Oulu, Finland; University of Birmingham, UK)
Target audience: Software practitioners and researchers wanting to understand the state of the art in using data mining for software engineering (SE).
Content: In the age of big data, data science is an essential skill with which software engineers should be equipped. It can be used to predict useful information about new projects based on completed projects. This tutorial offers core insights into the state of the art in this important field.
What participants will learn:
- Before data mining, this tutorial discusses the tasks needed to deploy learners to organizations.
- During data mining: from discretization to clustering to dichotomization and statistical analysis.
- And the rest:
- When local data is scarce, we show how to adapt data from other organizations to local problems.
- When privacy concerns block access, we show how to privatize data while still being able to mine it.
- When working with data of dubious quality, we show how to prune spurious information.
- When data or models seem too complex, we show how to simplify data mining results.
- When data is too scarce to support intricate models, we show methods for generating predictions.
- When the world changes, and old models need to be updated, we show how to handle those updates.
- When the effect is too complex for one model, we show how to reason across ensembles of models.
Pre-requisites: This tutorial makes minimal use of maths or advanced algorithms and will be understandable by developers and technical managers.
Biography: Tim Menzies is a full Professor at WVU. His experience in data analysis is extensive. He is the author of 200+ refereed publications and one of the co-founders of the PROMISE repository for repeatable SE experiments. Since 2001, he has been one of the leading proponents of applying data mining to software engineering data. His paper on data mining SE data in TSE’07 is the highest-cited paper in that journal for 2007 to 2012 [Men07]. He is the inventor of two new data mining algorithms (TAR3 and KEYS2) and one of the co-organizers of the PROMISE conference on data mining SE data. He has organized workshops for ICSE 1999, ICSE 2005, ICSE 2007, and ICSE 2012, and co-located conferences for ICSE 2008 and ICSE 2009. He was the PC co-chair for ASE 2012 and is a member of the editorial boards of IEEE TSE, ESE, JLVC, and the ASE journal. He organized all the PROMISE conference meetings from 2005 to 2011. He has organized special issues for the Empirical SE journal (five times), the Requirements Engineering journal, IEEE Intelligent Systems, and, twice, the Journal of Human Computer Studies. As a result of all the above, Dr. Menzies has an extensive collection of contacts in the international scientific community.
Biography: Burak Turhan is a Postdoctoral Research Fellow at the Department of Information Processing Science, University of Oulu (Finland). His research interests include empirical studies of software engineering on software quality, defect prediction, and cost estimation, as well as data mining for software engineering. He has published over 50 peer-reviewed articles on these topics, including one of the top five most cited papers in the Empirical Software Engineering journal since 2009, in which he investigated the feasibility of cross-company defect predictors with a novel filtering technique [Turhan09]. He has recently been awarded a 3-year research grant by the Academy of Finland on applying data science concepts to learn defect predictors across projects [Turhan12a, Turhan12b]. His research activities are in close collaboration with industrial partners, e.g. Nokia, Ericsson, Elektrobit, Turkcell, and IBM Canada. He is a steering committee member and the PC Chair for PROMISE’13, and is on the editorial board of the e-Informatica Software Engineering Journal (EISEJ).
Biography: Leandro L. Minku is a Research Fellow at the Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA), School of Computer Science, the University of Birmingham (UK). He received his PhD degree in Computer Science from the University of Birmingham (UK) in 2011, and was an intern at Google Zurich for six months in 2009/2010. He was the recipient of the Overseas Research Students Award (ORSAS) from the British government and several scholarships from the Brazilian Council for Scientific and Technological Development (CNPq). His research focuses on software prediction models, online/incremental machine learning for changing environments, and ensembles of learning machines. Along with Xin Yao, he is the author of the first approach able to improve the performance of software predictors based on cross-company data over single-company data by taking into account the changeability of software prediction tasks' environments.
Biography: Ekrem Kocaguneli is a Ph.D. candidate at the Lane Department of Computer Science and Electrical Engineering, West Virginia University. His research focuses on empirical software engineering and on the data and model problems associated with software estimation, tackling them with smarter machine learning algorithms. His research has provided solutions to industry partners such as Turkcell and IBTech (a subsidiary of the National Bank of Greece), and he recently completed an internship at Microsoft Research Redmond. His work has been published in the IEEE TSE, ESE, and ASE journals.
Biography: Fayola Peters is a Ph.D. candidate at the Lane Department of Computer Science and Electrical Engineering, West Virginia University. Along with Grechanik, she is the author of one of the two known algorithms (presented at ICSE’12) that can privatize data while still preserving its data mining properties.
T04: Software Analytics: Achievements and Challenges
MON, May 20, 2:00 PM – 6:00 PM
Pacific Concourse J
Dongmei Zhang and Tao Xie
(Microsoft Research, China; North Carolina State University, USA)
A huge wealth of data exists in the practice of software development. Further rich data are produced by modern software and services in operation, many of which tend to be data-driven and/or data-producing in nature. Hidden in these data is information about the quality of software and services and the dynamics of software development. Software analytics is the development and application of data exploration and analysis technologies, such as pattern recognition, machine learning, and information visualization, on software data in order to obtain insightful and actionable information for modern software and services. This tutorial presents achievements and challenges of research and practice on the principles, techniques, and applications of software analytics, highlighting success stories in industry, research achievements that have been transferred to industrial practice, and future research and practice directions in software analytics. Attendees will acquire the skills and knowledge needed to perform research or conduct practice in the field of software analytics and to integrate analytics into their own research, practice, or teaching.
Biography: Dr. Dongmei Zhang is a Principal Researcher at Microsoft Research Asia (MSRA), where she is also the research manager of the Software Analytics group. Her research interests include data-driven software analysis, machine learning, information visualization, and large-scale computing platforms. She founded the Software Analytics group at MSRA in 2009 and has since led the group in researching and developing innovative data exploration and analysis technologies to help improve the quality of software and services as well as software development productivity. Her group collaborates closely with multiple product teams at Microsoft, and has developed and deployed software analytics tools that have created high business impact and been successfully transferred to product teams. Prior to 2009, Dr. Zhang was the research manager of the User Interface Group at MSRA, focusing on digital ink research. The handwriting recognition technologies for East Asian languages from her group were released in Windows 7. Her group also researched and developed handwriting math equation recognition technology, which shipped in the Education Pack for Windows XP Tablet PC Edition in 2005. Prior to joining MSRA in 2004, Dr. Zhang had worked in the Digital Media Division at Microsoft headquarters since 2001. She played a key role in developing the award-winning Microsoft PhotoStory product, which was released in the Microsoft Plus! Digital Media Edition for Windows XP, Digital Image Pro 10, and PhotoStory 3 for Genuine Windows. Prior to joining Microsoft, Dr. Zhang developed key middle-tier components for e-Commerce products at Alventive Inc. Dr. Zhang received her Ph.D. degree from the Robotics Institute at Carnegie Mellon University, and her M.E. and B.E. from Tsinghua University.
She co-presented (1) a mini-tutorial on Software Analytics in Practice with Tao Xie, 1.5 hours at ICSE 2012, (2) a tutorial on eXtreme Software Analytics, with Tao Xie, 3 hours, at ASE 2011, (3) a tutorial on Teaching and Training for Software Analytics with Yingnong Dang, Shi Han, and Tao Xie, 3 hours at CSEE&T 2012.
Biography: Tao Xie has been an Associate Professor in the Department of Computer Science at North Carolina State University since 2005. He received his Ph.D. in Computer Science from the University of Washington at Seattle in 2005, under David Notkin’s supervision. He has worked as a visiting researcher at Microsoft Research Redmond and Microsoft Research Asia. His research interests are in software engineering, with a focus on improving software reliability and dependability, including software testing and analysis and software analytics. He leads the Automated Software Engineering Research Group at North Carolina State University. He has published widely in major software engineering journals, conferences, and workshops. He is an ACM Distinguished Speaker and an IEEE Computer Society Distinguished Visitor. He received an NSF CAREER Award in 2009, a 2011 Microsoft Research SEIF Award, 2008, 2009, and 2010 IBM Faculty Awards, and a 2008 IBM Jazz Innovation Award. He received the ASE 2009 Best Paper Award and an ACM SIGSOFT Distinguished Paper Award. He was Program Co-Chair of ICSM 2009 and of MSR 2011 and 2012. He co-organized a Dagstuhl Seminar on Mining Programs and Processes in 2007, and a Dagstuhl Seminar on Practical Software Testing: Tool Automation and Human Factors in 2010. He co-presented (1) a mini-tutorial on Software Analytics in Practice with Dongmei Zhang, 1.5 hours, at ICSE 2012; (2) a tutorial on eXtreme Software Analytics with Dongmei Zhang, 3 hours, at ASE 2011; (3) a tutorial on Teaching and Training for Software Analytics with Yingnong Dang, Shi Han, and Dongmei Zhang, 3 hours, at CSEE&T 2012; (4) a technical briefing on Mining Software Engineering Data with Ahmed E. Hassan, 1.5 hours, at ICSE 2011; (5) tutorials on Mining Software Engineering Data with Ahmed E. Hassan, 3 hours, at ICSE 2012, ICSE 2010, ICSE 2009, ICSE 2008, and ICSE 2007; (6) a tutorial on Mining for Software Reliability with Chao Liu and Jiawei Han, 5 hours, at ICDM 2007; (7) a tutorial on Data Mining for Software Engineering with Jian Pei, 3 hours, at KDD 2006; and (8) tutorials on Parameterized Unit Testing: Principles, Techniques, and Applications in Practice with Nikolai Tillmann and Jonathan de Halleux at ICSE 2010 and ICSE 2009.
T05: Developing Verified Programs with Dafny
MON, May 20, 2:00 PM – 6:00 PM
Pacific Concourse H
Rustan Leino
(Microsoft Research, USA)
Reasoning about programs is a fundamental skill that every software engineer needs. This tutorial provides participants an opportunity to get hands-on experience with Dafny, a tool that can help develop this skill. Dafny is a programming language and state-of-the-art program verifier. The language is type-safe and sequential, and it includes common imperative features, dynamic object allocation, and datatypes. It also includes specification constructs like pre- and postconditions, which let a programmer record the intended behavior of the program along with the executable code that is supposed to cause that behavior. What sets Dafny apart from other programming systems is that it runs its verifier continuously in the background, and thus the consistency of a program and its specifications is always enforced. This tutorial gives a taste of how to use Dafny in program development. This includes an overview of Dafny, basics of writing specifications, how to debug verification attempts, and how to formulate and prove lemmas. The tutorial is geared toward software engineers, students, and educators. The participants are expected to have programming experience.
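To give a flavor of the specification constructs mentioned above, here is a small, hedged plain-Java analogue (the class and method names are ours, for illustration). In Dafny itself, the postcondition would be written as an `ensures` clause and proved statically by the verifier; the runtime assertion below only approximates that guarantee:

```java
// Plain-Java sketch of a Dafny-style specification (illustration only).
// Dafny analogue:
//   method Abs(x: int) returns (y: int)
//     ensures y >= 0 && (y == x || y == -x)
public class Abs {
    static int abs(int x) {
        int y = (x < 0) ? -x : x;
        // Postcondition: Dafny would prove this for all inputs at
        // verification time; here it is only checked per execution.
        assert y >= 0 && (y == x || y == -x);
        return y;
    }

    public static void main(String[] args) {
        System.out.println(abs(-5)); // prints 5
    }
}
```

The key difference the tutorial explores is that Dafny's verifier discharges such conditions for every possible input, continuously in the background, rather than for the inputs a test happens to exercise.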
Biography: Rustan Leino is a Principal Researcher in the Research in Software Engineering (RiSE) group at Microsoft Research. He is known for his work on programming methods and program verification tools. These include the languages and tools Dafny, Chalice, Jennisys, Spec#, Boogie, Houdini, ESC/Java, and ESC/Modula-3. With Dafny, his mission is not just to provide a tool that helps teach programmers to reason about programs, but also to provide a vision for the kind of automatic reasoning support that all future programming environments may provide. Prior to Microsoft Research, Leino worked at DEC/Compaq SRC. He received his PhD from Caltech (1995), before which he designed and wrote object-oriented software as a technical lead in the Windows NT group at Microsoft. Leino collects thinking puzzles on a popular web page and hosts the Verification Corner video show on channel9.msdn.com. In his spare time, he plays music and, recently having ended his tenure as cardio exercise class instructor, is trying to learn to dance.
T06: Software Engineering in the Age of Data Privacy: What and How the Global IT Community Can Share and Learn
Mark Grechanik, Fayola Peters, Denys Poshyvanyk, and Tim Menzies
(University of Illinois at Chicago, USA; West Virginia University, USA; College of William and Mary, USA)
Cancelled
T07: Software Metrics: Pitfalls and Best Practices
TUE, May 21, 8:30 AM – 12:30 PM
Pacific Concourse I
Eric Bouwers, Arie van Deursen, and Joost Visser
(Software Improvement Group, Netherlands; TU Delft, Netherlands)
Using software metrics to keep track of the progress and quality of products and processes is common practice in industry. Additionally, designing, validating, and improving metrics is an important research area. Although using software metrics can help in reaching goals, the effects of using metrics incorrectly can be devastating. In this tutorial we leverage 10 years of metrics-based risk assessment experience to illustrate the benefits of software metrics, discuss different types of metrics, and explain typical usage scenarios. Additionally, we explore various ways in which metrics can be interpreted, using examples solicited from participants and practical assignments based on industry cases. During this process we present the four common pitfalls of using software metrics. In particular, we explain why metrics should be placed in a context in order to maximize their benefits. A methodology based on benchmarking to provide such a context is discussed and illustrated by a model designed to quantify the technical quality of a software system. Examples of applying this model in industry are given, and the challenges involved in interpreting such a model are discussed. This tutorial provides an in-depth overview of the benefits and challenges involved in applying software metrics. At the end you will have all the information you need to use, develop, and evaluate metrics constructively.
Biography: Joost Visser is head of research at the Software Improvement Group (SIG) in Amsterdam, The Netherlands, and holds a position as professor of large-scale software systems at the Radboud University Nijmegen, The Netherlands. Within SIG, Joost is responsible for innovation of tools and services, academic relations, internship coordination, and general research. In the past seven years he has been involved in the development, evaluation and application of a benchmark-based model for software quality.
Biography: Arie van Deursen is a full professor in software engineering at Delft University of Technology, The Netherlands, where he leads the Software Engineering Research Group. His research topics include software testing, software architecture, and collaborative software development, with a strong focus on the empirical evaluation and application of this research.
Biography: Eric Bouwers is a qualified teacher, technical consultant at the Software Improvement Group in Amsterdam, The Netherlands and a part-time Ph.D. student at Delft University of Technology. He is interested in how software metrics can assist in quantifying the architectural aspects of software quality. In the past five years this interest has led to the design, evaluation and application of two architecture level metrics that are now embedded in a benchmark-based model for software quality.
T08: A Hands-On Java PathFinder Tutorial
TUE, May 21, 8:30 AM – 12:30 PM
Pacific Concourse H
Peter Mehlitz, Neha Rungta, and Willem Visser
(SGT, USA; NASA Ames Research Center, USA; Stellenbosch University, South Africa)
Java Pathfinder (JPF) is an open source analysis system that automatically verifies Java programs. This tutorial provides software engineering researchers and practitioners an opportunity to learn about JPF, install and run it, and understand the concepts required to extend it. The hands-on tutorial will expose attendees to the basic architecture of the JPF framework, demonstrate ways to use it for analyzing their own artifacts, and illustrate how to extend JPF to implement their own analyses. One of the defining qualities of JPF is its extensibility. JPF has been extended to support symbolic execution, directed automated random testing, different forms of choice generation, configurable state abstractions, various heuristics for enabling bug detection, configurable search strategies, checking of temporal properties, and much more. JPF supports these extensions at the design level through a set of stable, well-defined interfaces. The interfaces are designed not to require changes to the core, yet to enable the development of various JPF extensions. In this tutorial we give attendees hands-on experience implementing these interfaces in order to extend JPF. The tutorial is targeted toward a general software engineering audience: software engineering researchers and practitioners. Attendees need a good understanding of the Java programming language and should be fairly comfortable with Java program development. They are not required to have any background in Java Pathfinder, software model checking, or any other formal verification technique. The tutorial will be self-contained.
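As a minimal sketch of what driving JPF looks like in practice: JPF is configured through an application properties (*.jpf) file. The file and class names below are hypothetical, but `target`, `classpath`, and `listener` are standard jpf-core configuration keys, and `PreciseRaceDetector` is a listener bundled with jpf-core:

```properties
# racer.jpf -- JPF application properties (hypothetical file/class names)
target = mypackage.Racer                 # class whose main() JPF model-checks
classpath = build/classes                # where JPF finds the application bytecode
# plug in a bundled analysis by registering a listener:
listener = gov.nasa.jpf.listener.PreciseRaceDetector
```

JPF then explores the thread interleavings of `Racer.main()`, reporting any data race the listener detects; with a jpf-core checkout, such a file is typically run as `java -jar build/RunJPF.jar racer.jpf`. The extensions covered in the tutorial are hooked in the same way, via listener and related configuration keys.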
Biography: Peter C. Mehlitz is a member of the Robust Software Engineering Group at the NASA Ames Research Center. He designed and currently maintains the Java Pathfinder (JPF) verification system core, and had a pivotal role in its open sourcing and the subsequent formation of the JPF user community. In this role, he has led and advised numerous collaborations with academia and industry, and has given presentations at annual JPF workshops, tutorials, and summer school classes. Mr. Mehlitz has more than 30 years of experience in developing large-scale software systems and is a strong open source protagonist. His research interests include the use of virtual machines for high-reliability applications, radiation-tolerant software, and software design patterns. Mr. Mehlitz holds an MS in Aeronautical and Astronautical Engineering from the University of the Federal Armed Forces, Munich, 1984.
Biography: Dr. Neha Rungta is currently a researcher in the Robust Software Engineering Group at the NASA Ames Research Center. Dr. Rungta’s research has been geared toward developing verification techniques for automated test case generation, detection of subtle concurrency errors, and incremental program analysis. Dr. Rungta actively maintains and develops several extensions to Java Pathfinder with respect to these research topics. Dr. Rungta has published in several top-tier peer-reviewed venues in those fields. Dr. Rungta holds a PhD in Computer Science from Brigham Young University.
Biography: Dr. Willem Visser is a full Professor of Computer Science at Stellenbosch University, South Africa. Earlier, Dr. Visser was part of the Research Institute for Advanced Computer Science (RIACS) in the United States and worked in the Robust Software Engineering group at NASA Ames Research Center. During this time he was the main research lead for the Java PathFinder (JPF) project. Dr. Visser also has experience working in leading startups in the areas of software testing and stability. Dr. Visser has served as a program committee member for internationally distinguished conferences including the International Symposium on Formal Methods (FM), the Symposium on Foundations of Software Engineering (FSE), and the International Conference on Software Engineering (ICSE). Apart from the implementation of the JPF tool, he has contributed over 50 papers in the areas of formal verification and software reliability to the computer science literature. Dr. Visser holds a PhD from the University of Manchester, 1998.
T09: Specifying Effective Non-functional Requirements
TUE, May 21, 2:00 PM – 6:00 PM
Pacific Concourse H
John Terzakis
(Intel, USA)
Non-functional (quality and performance) requirements present unique challenges for requirements authors, reviewers, and testers. They often begin as ambiguous concepts such as “The software must be easy to install” or “The software must be intuitive and respond quickly”. As written, these requirements are not testable because they are subjective in nature. The definitions of words like “easy”, “intuitive”, and “quickly” are open to interpretation; one person’s “easy” could be another person’s “difficult”. To be testable, non-functional requirements need to be quantifiable and measurable. This tutorial introduces Planguage, a notation that facilitates the development of effective, testable non-functional requirements. Writing effective non-functional requirements involves removing subjectivity and replacing it with well-defined testing parameters. Subjectivity is removed by eliminating weak words, ambiguity, and unbounded lists. Well-defined testing parameters include a scale (unit of measure), a meter (device or process used to determine the position on a scale), and the Landing Zone (or range of success). With the non-functional requirement rewritten in quantifiable terms, the testing space is bounded and the requirement becomes verifiable. This tutorial first presents definitions and examples of functional and non-functional requirements. It then shows examples of non-functional requirements that are not testable and analyzes the issues in each one. Next, Planguage is introduced, along with an explanation of how its keywords improve testability. Finally, the tutorial revisits each of the non-testable, non-functional requirement examples and rewrites them to be testable. Students will be able to begin applying the concepts in this tutorial immediately.
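As an illustration of the parameters described above, the vague requirement “The software must be easy to install” might be restated with Planguage keywords roughly as follows (the scale, meter, and all numbers are hypothetical, chosen only to show the shape of a testable requirement):

```
Tag:      Install.Ease
Ambition: The software must be easy to install.
Scale:    Minutes from launching the installer to the first successful
          start of the application, for a first-time user.
Meter:    Timed installation sessions with first-time users on a
          reference machine.
Goal:     90% of users complete installation in under 10 minutes.
Fail:     Any user needs more than 30 minutes.
```

The Scale and Meter bound the testing space, and the Goal and Fail levels delimit the Landing Zone, turning a subjective statement into a verifiable one.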
Biography: John Terzakis has over 25 years of experience in developing, writing and testing software. With Intel for 13 years, he is currently a Staff Engineer working with software planning and development teams on enhancing product requirements in order to decrease planning and development times, reduce defects and improve overall product quality. He is a certified Intel instructor for Requirements Engineering courses. His prior experience includes Director and Manager roles with Shiva, Racal InterLan and Dataproducts. He was also a Member of the Technical Staff at Bell Labs. John is a Fellow of the IARIA (International Academy, Research and Industry Association). He has presented tutorials, sessions and papers at the Better Software Conference West (2007, 2009-2012), the IEEE International Requirements Engineering Conference (2011), the ICCGI Conference (2010 and 2012), the Project Summit & Business Analyst World Conference (2011), and an International Institute of Business Analysis (IIBA) Hartford, CT chapter meeting (2012). His article, “Virtual Retrospectives for Geographically Dispersed Software Teams”, was published in the May/June 2011 issue of IEEE Software. John holds an MS EE from Stanford University and a BS EE from Northeastern University.
T10: Pragmatic Pricing and Collaboration in Agile Software Development Projects
Matthias Book, Simon Grapenthin, and Volker Gruhn
(University of Duisburg-Essen, Germany)
Cancelled
T11: Efficient Quality Assurance of Variability-Intensive Systems
SAT, May 25, 8:30 AM – 12:30 PM
Boardroom A
Patrick Heymans, Axel Legay, and Maxime Cordy
(University of Namur, Belgium; INRIA, France)
Variability is becoming an increasingly important concern in software development, but techniques to cost-effectively verify and validate software in the presence of variability have yet to become widespread. This tutorial/briefing offers an overview of the state of the art in an emerging discipline at the crossroads of formal methods and software engineering: quality assurance of variability-intensive systems. We will present the most significant results obtained during the last four years or so, ranging from conceptual foundations to readily usable tools. Among the various quality assurance techniques, we focus on model checking, but we extend the discussion to other techniques such as testing. With its lightweight use of mathematics and its balance between theory and practice, this tutorial/briefing is designed to be accessible to a broad software engineering audience. Researchers working in the area, those wishing to join it, or the simply curious will get a comprehensive picture of the recent developments. Practitioners developing variability-intensive systems are invited to discover the capabilities of our techniques and tools and to consider integrating them into their processes.
Biography: Dr. Patrick Heymans is a full professor of SE at the University of Namur (Belgium) and a visiting researcher at INRIA/Univ. of Lille/CNRS (France). He is a founding member and current director of the PReCISE research centre (55 researchers), where he leads the requirements engineering and software product line efforts. He has supervised 8 PhD theses and authored over 100 peer-reviewed publications. According to Google Scholar, he has an h-index of 24. He is a regular referee for top SE journals and conferences (RE, SPLC, ICSE, ESEC-FSE...) and an associate editor of IEEE TSE. Patrick was the program chair of RE’11. He is principal investigator on various international SE research projects and regularly acts as an advisor and trainer for IT companies. Patrick’s research has been recognized through a number of awards, including a Distinguished Paper award at RE’06 and Best Research Paper awards at RE’09 and RE’12. Patrick is one of the principal contributors to the line of work presented in this tutorial. He is co-author of papers published at ICSE’10 [1], ICSE’11 [2], ICSE’12 [3] and RE’12 [4] (best paper award), as well as in TSE [5] and STTT [6], on which this tutorial is based. Patrick has presented this work on many occasions in the last few years, including keynote addresses at SPLC’12 and LMO’10 and invited presentations at IFIP WG 2.9, the Bits&Chips 2012 Embedded Systems conference (Netherlands), Politecnico di Milano and DePaul University (Chicago). Patrick has 15 years of teaching experience, from undergraduate level to doctoral schools and professional training.
Recent tutorials:
Mathieu Acher, Patrick Heymans, Philippe Collet and Philippe Lahire. Next-Generation Model-based Variability Management: Languages and Tools. 15th International Conference on Model Driven Engineering Languages and Systems (MODELS’12), Innsbruck, Austria, September 2012.
Mathieu Acher, Raphaël Michel and Patrick Heymans. Next-Generation Model-based Variability Management: Languages and Tools. 16th International Software Product Lines Conference (SPLC’12), Salvador de Bahia, Brazil, September 2012.
Patrick Heymans, Visual Effectiveness of Modeling Notations. Invited tutorial at the yearly IFI Summer school, University of Zurich, Switzerland, June 2012.
Martin Mahaux and Patrick Heymans. Improvisational Theater for Information Systems: an Agile, Experience-Based, Prototyping Technique. 24th International Conference on Advanced Information Systems Engineering (CAiSE’12), Gdansk, Poland, June 2012.
Mathieu Acher, Raphaël Michel and Patrick Heymans. Next-Generation Model-based Variability Management: Languages and Tools. Conférence en IngénieriE du Logiciel (CIEL’12), Rennes, France, June 2012.
Martin Mahaux and Patrick Heymans. Improvisational Theater for Information Systems: Breathing Collaboration and Creativity into your Developments. Sixth International Conference on Research Challenges in Information Science (RCIS’12), Valencia, Spain, May 2012.
Biography: Dr. Axel Legay has held positions at the University of Liège and at CMU (under the supervision of Ed Clarke). He is now a full-time researcher at INRIA, where he leads the ESTASE team (8 researchers), and a part-time associate professor at Aalborg University. His main research interests are in developing formal specification and verification techniques for SE. Axel is a founder and major contributor of statistical model checking (a statistical variant of model checking effectively used in industry). He has supervised 3 PhD theses and authored more than 110 peer-reviewed publications. He is a referee for top journals and conferences in formal verification and simulation, and program co-chair of INFINITY’09, FIT’10 and Runtime Verification 2013. He is also workshop chair at ETAPS’14. He is principal investigator on 2 national and 3 European research projects. Over the past five years, Axel has delivered tutorials on statistical model checking at international conferences (e.g., QEST 2009), workshops and doctoral schools (e.g., the ARTIST winter school on embedded systems). He has also presented part of the material offered in this tutorial in various workshops and seminars around the world.
Biography: Maxime Cordy is an FNRS research fellow at the University of Namur (Belgium). He is the author of 8 publications on the topic of this tutorial, including papers at ICSE’12 and RE’12 (best paper award). Together with Andreas Classen, Maxime implemented SNIP, the first complete model-checking toolchain for the design and verification of variability-intensive systems. In this tutorial, Maxime will be mostly in charge of the tool demo.
T12: The Company Approach to Teaching Software Engineering Project Courses
David Broman
(UC Berkeley, USA)
Cancelled
T13: Software Requirement Patterns
SAT, May 25, 2:00 PM – 6:00 PM
Boardroom A
Xavier Franch
(Universitat Politècnica de Catalunya, Spain)
Many recent studies still show that a significant percentage of software projects run over budget, suffer delays or are simply cancelled. One of the most widely recognized causes of this situation is the failure to produce a good set of software requirements. Methods for improving the quality of software requirement specifications are therefore needed, and pattern-based requirements engineering is one such method. The definition and use of a software requirement pattern catalogue supports the elicitation, validation, documentation and management of requirements. By designing an appropriate catalogue, an IT organization gains a starting point for the requirements engineering activity, reducing the associated costs and producing better requirements. The tutorial will be organized into two parts. The first part will introduce the foundations of software requirement patterns: concept, metamodel, semantics, classification schemas and processes. The second part will put the theory into practice by building an excerpt of a catalogue, starting from selected requirements taken from specification documents. The tutorial is addressed to researchers, practitioners and educators in software engineering, especially requirements engineers. For researchers, an updated state of the art will be presented, resting on scientific grounds. For practitioners, processes and templates will be shown and a successful case study of pattern-based requirements engineering will be analysed in detail. For educators, the tutorial will provide the basis for developing course material. A basic knowledge of the requirements engineering discipline, from either research or practice, is required.
Biography: Xavier Franch is Associate Professor at UPC, Spain. He has published >150 refereed papers in journals and international conferences like IEEE Software, IST, JSS, SPE, CSI, SoSyM, IJSEKE, RE, SAC, COMPSAC, ICSR, EASE, REFSQ, ECSA, SEKE, CAiSE, ER, ICCBSS, RCIS.
- Steering Committee membership: RE (from 2006); REFSQ (from 2010); CAiSE (from 2011).
- General Chair: RE 2008.
- Program co-Chair: REFSQ 2011, CAiSE 2012, ICCBSS 2006.
- Program Board membership: RE (2012-13), CAiSE (2011, 2013), RCIS (2013).
- Program Committee membership: >100 in conferences like RE, SAC, CBSE, REFSQ, SPLC, CAiSE, ER.
- Journal reviewer: regularly for SCI-indexed journals such as TSE, TOSEM, IEEE Software, Computer, REJ, EMSE, IEE Proceedings, FGCS, etc. (more than 20 in total).
- Editorial Board membership: Elsevier IST (from 2012), IJISMD (from 2010).
- Other relevant positions in conferences: Workshop co-chair (CAiSE 2008, 2013). Doctoral Symposium co-chair (RCIS, 2013). Proceedings co-chair (ICCBSS, 2005). Regional Publicity chair (RE, 2004).
- Special issues editor: ISJ (2013), REJ (2012) and JSS (2008).
- Workshop organization: RECOTS (at RE, 2003-04); MPEC (ICSE, 2004-05); SOCCER (RE, 2005-06); APLE (SPLC, 2006); iStar (IDEAS, 2008; CAiSE, 2010; RE, 2011); and CMM (CAiSE, 2011).
- Keynotes: IEEE RCIS 2012.
- Seminars at the following universities: City, Wien, Ottawa, Johannes Kepler, Sevilla, Valencia, NTNU, FBK, Wollongong, Toronto, Groningen.
- Other talks: invited at RE’04 (state-of-the-practice talk).
- Associate member of the International Requirements Engineering Board (IREB).
- Two awards related to the topic of the tutorial: Best Paper at RCIS 2009, Best Poster at RE 2011.
T14: Essence: Kernel and Language for Software Engineering Practices
Arne J. Berre, Ivar Jacobson, Shihong Huang, Mira Kajko-Mattsson, and Pan-Wei Ng
(SINTEF, Norway; Ivar Jacobson Int., Sweden; Florida Atlantic University, USA; KTH, Sweden; Ivar Jacobson Int., China)
Cancelled