Generate CCS Codes CCS General and reference CCS Hardware CCS Computer systems organization CCS Networks CCS Software and its engineering CCS Theory of computation CCS Mathematics of computing CCS Information systems CCS Security and privacy CCS Human-centered computing CCS Computing methodologies CCS Applied computing CCS Social and professional topics CCS Proper nouns: People, technologies and companies CCS General and reference Document types CCS General and reference Cross-computing tools and techniques CCS Hardware Printed circuit boards CCS Hardware Communication hardware, interfaces and storage CCS Hardware Integrated circuits CCS Hardware Very large scale integration design CCS Hardware Power and energy CCS Hardware Electronic design automation CCS Hardware Hardware validation CCS Hardware Hardware test CCS Hardware Robustness CCS Hardware Emerging technologies CCS Computer systems organization Architectures CCS Computer systems organization Embedded and cyber-physical systems CCS Computer systems organization Real-time systems CCS Computer systems organization Dependable and fault-tolerant systems and networks CCS Networks Network architectures CCS Networks Network protocols CCS Networks Network components CCS Networks Network algorithms CCS Networks Network performance evaluation CCS Networks Network properties CCS Networks Network services CCS Networks Network types CCS Software and its engineering Software organization and properties CCS Software and its engineering Software notations and tools CCS Software and its engineering Software creation and management CCS Theory of computation Models of computation CCS Theory of computation Formal languages and automata theory CCS Theory of computation Computational complexity and cryptography CCS Theory of computation Logic CCS Theory of computation Design and analysis of algorithms CCS Theory of computation Randomness, geometry and discrete structures CCS Theory of computation Theory and algorithms for application domains CCS Theory of computation Semantics and reasoning CCS Mathematics of computing Discrete mathematics CCS Mathematics of computing Probability and statistics CCS Mathematics of computing Mathematical software CCS Mathematics of computing Information theory CCS Mathematics of computing Mathematical analysis CCS Mathematics of computing Continuous mathematics CCS Information systems Data management systems CCS Information systems Information storage systems CCS Information systems Information systems applications CCS Information systems World Wide Web CCS Information systems Information retrieval CCS Security and privacy Cryptography CCS Security and privacy Formal methods and theory of security CCS Security and privacy Security services CCS Security and privacy Intrusion/anomaly detection and malware mitigation CCS Security and privacy Security in hardware CCS Security and privacy Systems security CCS Security and privacy Network security CCS Security and privacy Database and storage security CCS Security and privacy Software and application security CCS Security and privacy Human and societal aspects of security and privacy CCS Human-centered computing Human computer interaction (HCI) CCS Human-centered computing Interaction design CCS Human-centered computing Collaborative and social computing CCS Human-centered computing Ubiquitous and mobile computing CCS Human-centered computing Visualization CCS Human-centered computing Accessibility CCS Computing methodologies Symbolic and algebraic manipulation CCS Computing 
methodologies Parallel computing methodologies CCS Computing methodologies Artificial intelligence CCS Computing methodologies Machine learning CCS Computing methodologies Modeling and simulation CCS Computing methodologies Computer graphics CCS Computing methodologies Distributed computing methodologies CCS Computing methodologies Concurrent computing methodologies CCS Applied computing Electronic commerce CCS Applied computing Enterprise computing CCS Applied computing Physical sciences and engineering CCS Applied computing Life and medical sciences CCS Applied computing Law, social and behavioral sciences CCS Applied computing Computer forensics CCS Applied computing Arts and humanities CCS Applied computing Computers in other domains CCS Applied computing Operations research CCS Applied computing Education CCS Applied computing Document management and text processing CCS Social and professional topics Professional topics CCS Social and professional topics Computing / technology policy CCS Social and professional topics User characteristics CCS Proper nouns: People, technologies and companies Companies CCS Proper nouns: People, technologies and companies Organizations CCS Proper nouns: People, technologies and companies People in computing CCS Proper nouns: People, technologies and companies Technologies CCS General and reference Document types Surveys and overviews Title Moving towards a socially-driven internet architectural design Abstract This paper provides an interdisciplinary perspective concerning the role of prosumers on future Internet design based on the current trend of Internet user empowerment. The paper debates the prosumer role, and addresses models to develop a symmetric Internet architecture and supply-chain based on the integration of social capital aspects. It has as goal to ignite the discussion concerning a socially-driven Internet architectural design. Title MM'10 workshop summary for SSPW: ACM workshop on social signal processing 2010 Abstract The Workshop on Social Signal Processing (SSPW) is the yearly event of the Social Signal Processing Network (EU-FP7 SSPNet project). This year's workshop programme consists of 4 premium Key Note Talks by Jeff Cohn, Alex Pentland. Justine Cassell, and Toyoaki Nishida, an oral session with 4 presentations, a poster session with 7 posters, and a panel session where the panelists will be the Key Note Speakers and the workshop organizers. Title A survey of context data distribution for mobile ubiquitous systems Abstract The capacity to gather and timely deliver to the service level any relevant information that can characterize the service-provisioning environment, such as computing resources/capabilities, physical device location, user preferences, and time constraints, usually defined as context-awareness, is widely recognized as a core function for the development of modern ubiquitous and mobile systems. Much work has been done to enable context-awareness and to ease the diffusion of context-aware services; at the same time, several middleware solutions have been designed to transparently implement context management and provisioning in the mobile system. However, to the best of our knowledge, an in-depth analysis of the context data distribution, namely, the function in charge of distributing context data to interested entities, is still missing. 
Starting from the core assumption that only effective and efficient context data distribution can pave the way to the deployment of truly context-aware services, this article aims at putting together current research efforts to derive an original and holistic view of the existing literature. We present a unified architectural model and a new taxonomy for context data distribution by considering and comparing a large number of solutions. Finally, based on our analysis, we highlight some of the research challenges that remain unsolved and identify some possible directions for future work. Title Survey of state melding in virtual worlds Abstract The fundamental goal of virtual worlds is to provide users with the illusion that they are all seeing and interacting with each other in a consistent world. State melding is the core of creating this illusion of a shared reality. It includes two major parts: consistency maintenance and state update dissemination. Well-designed state melding technologies are also critical for developing a virtual world that can scale to a large number of concurrent users and provide satisfying user experiences. In this article, we present a taxonomy of consistency models and a categorization of state update dissemination technologies for virtual worlds. To connect theories and practices, we then apply the taxonomy in case studies of several state-of-the-art virtual worlds. We also discuss challenges and promising solutions of state melding in large-scale virtual worlds. This survey aims to provide a thorough understanding of existing approaches and their strengths and limitations, and to assist in developing solutions to improve the scalability and performance of virtual worlds. Title A survey of adaptive services to cope with dynamics in wireless self-organizing networks Abstract In this article, we consider different types of wireless networks that benefit from and, in certain cases, require self-organization. Taking mobile ad hoc, wireless sensor, wireless mesh, and delay-tolerant networks as examples of wireless self-organizing networks (WSONs), we identify that the common challenges these networks face are mainly due to lack of centralized management, device heterogeneity, unreliable wireless communication, mobility, resource constraints, or the need to support different traffic types. In this context, we survey several adaptive services proposed to handle these challenges. In particular, we group the adaptive services as core services and network-level services. By categorizing different types of services that handle adaptation and the types of adaptations, we intend to provide useful design guidelines for achieving self-organizing behavior in network protocols. Finally, we discuss open research problems to encourage the design of novel protocols for WSONs. Title Mobile communication for emerging Bangladesh: exploring the privacy risks for youth population Abstract Connectivity in social and economic spheres using mobile technology is a global phenomenon. This kind of communication is more significant for Southern Countries: a means to develop, a way to come out of poverty, and a path towards an equitable society. This relationship between new communication technologies and society is complex, primarily due to their multidimensional effects in personal and social lives. The absence of a proper policy guideline and infrastructure to nurture the mobile communications' potentials can leave users at risk of privacy violations in an increasingly flattened world. 
Youths of developing societies, who have a very high representation in ICT usage, hence face greater risks in terms of privacy violation and involuntary personal data commodification. This paper, as a part of a multi-year study, specifically looks into the vulnerability of the Bangladeshi youth population using mobile devices for voice and data communications. Bangladesh, one of the emerging economies in South Asia, has a thriving Telecom/ICT industry with an ever-growing number of users, the majority of them young. Based on a nationwide representative survey, we have found the level of trust in existing mobile telephony to be significantly higher than in the Internet across the country among all respondents, amid the absence of any clear privacy and security framework at the national level. This paper moreover shows that a significant number of the younger generation (both male and female) are unaware of the concept of privacy in the 'Digital Age' and also have little or no idea about the possible risks involved in sharing any voice or data communication. Title Web analytics and metrics: a survey Abstract This is a survey paper that presents different types of Web Analytics metrics and how the data related to these metrics is collected. With the increasing need to meet customer preferences and to understand customer behavior, Web Analytics plays an important role in approaching and fulfilling these needs. The aim of the study is to contribute to the existing work by providing some of the key factors of change in the context of Web Analytics implementation and transition towards a data-driven analytics culture. Web Analytics has become a very important component of many web-based system environments and helps in making business decisions. This paper describes the process of Web Analytics, and also summarizes its importance and workings. For the evaluation of web sites, many tools and metrics are available. These metrics are essentially derived from the responses of website users, which indicate the success of the website. Here we study the information collected and the analyses done by various authors related to this collection. This paper will hopefully encourage a discussion and further study of Web Analytics. Title First results and future developments of the MIBISOC Project in the IBISlab of the university of parma Abstract Medical Imaging using Bio-Inspired and Soft Computing (MIBISOC) is a Marie Curie Initial Training Network (ITN) within the EU Seventh Framework Programme. MIBISOC is a training programme in which sixteen Early-Stage Researchers (ESRs) are exposed to a wide variety of Soft Computing (SC) and Bio-Inspired Computing (BC) techniques, and face the challenge of applying them to the different situations and problems which characterize medical image processing tasks. Hence, the main goal of the project is to generate new methods and solutions from the combination of the ideas of experts from the area of Medical Imaging (MI) with those working on BC and SC applications. The Intelligent Bio-Inspired Systems laboratory (IBISlab) at the University of Parma is one of the partners of this ITN. In this paper, we describe the work which is being developed in the IBISlab, as well as its future developments and main objectives, within the framework of this ITN. Title How to look inside the brain Abstract In the brain, less is more. 
Many of the most successful methods in neuroscience research draw their power from stripping away all but the structures or phenomena relevant to a particular experimental question--focusing on the problem at hand, and cutting out the distractions. The birth of the modern field one century ago is due to the discovery of a tissue staining protocol, the 'Golgi Stain', that marks only a small percentage of neurons in nervous tissue, but leaves the vast majority of them invisible, permitting visualization under the microscope of individual trees in what would otherwise have been an impenetrable forest. Today, with the advent of modern genetics and molecular biology, this same principle has been applied across countless brain areas and a broad set of questions about the anatomical configuration, function, development, and plasticity of the nervous system. Many of the most powerful and commonly employed tools--like Green Fluorescence Protein, Channelrhodopsin, and virus-mediated tracing of neuronal projections--are actually biological solutions to completely unrelated problems, such as how to get a jellyfish to glow green, how to convey photosensitivity to a unicellular organism, or how to spread the Rabies virus across an entire nervous system. These research tools, the product of millions of years of evolution (and a few years of human tinkering) yield datasets whose explanatory power draws from the fact that they, like the Golgi Stain, allow researchers to focus on the question at hand and filter out the surrounding noise. Title VISION: cloud-powered sight for all: showing the cloud what you see Abstract We argue that for computers to do more for us, we need to show the cloud what we see and embrace cloud-powered sight for mobile users. We present sample applications that will be empowered by this vision, discuss why the timing is right to tackle it, and offer our initial thoughts on some of the important research challenges. CCS General and reference Document types Reference works Title Implementing evidence-based practices makes a difference in female undergraduate enrollments Abstract While many computing departments may be aware there are "promising" and "proven" practices for recruiting and retaining female students, there seems to be a drive to try new and novel approaches rather than use what is known, or strongly suspected, to be effective. Developing a diverse student body is a long-term multi-faceted process that includes active recruitment, inclusive pedagogy, meaningful curriculum and necessitates student, faculty and institutional support, as well as assessment of progress [1,2,3]. Given all the moving parts and intrinsic challenges of enacting change, departments could make it easier on themselves - and very likely achieve better results - if they intentionally and systematically used practices that have been shown to be effective. This panel will present the rationale for implementing evidence-based practices to increase female enrollments in undergraduate computing departments, and share evidence of successes. Wendy DuBow will examine the concept of evidence-based practices as well as describe briefly the research-based approaches taken by the National Center for Women & Information Technology (NCWIT) to identify the evidence-based practices an academic institution could use and distribute easy-to-use materials explaining such practices. 
Elizabeth Litzler will supplement this rationale by sharing compelling evaluation data that show that academic departments that implement a variety of evidence-based practices and actively seek to increase their female enrollments actually do see increases. Maureen Biggers will describe her department's recent efforts to increase female undergraduates at Indiana University, which enabled them to double the number of new female majors. Mike Erlinger will discuss Harvey Mudd's recent successes in attracting more Computer Science majors overall, including a large percentage of female students. Title Read, write, and present for ACM SIGUCCS conferences Abstract The Association for Computing Machinery Special Interest Group in University and College Computing Services (ACM SIGUCCS) is made up of professionals who support and manage information technology services at higher education institutions. SIGUCCS sponsors an annual conference that is drawn together by volunteers. The conference program takes the form of paper authors presenting their findings in 30-minute talks, as part of a panel, or in a poster session. Papers are presented on a variety of tracks such as management, technology, customer support, documentation and training, or instructional technology. The track titles can change over time. Attending and contributing to the SIGUCCS conference program is an opportunity for professional development. This paper seeks to demystify the process of contributing to the SIGUCCS conference program as a reader, author, and presenter, and thus make it easier to obtain professional development through contributing to the conference program. NA Title The brain Abstract Title ACM career and job center Abstract Title Using empirical insider threat case data to design a mitigation strategy Abstract Title Getting and staying agile Abstract The human side of software development thrives on face-to-face interaction and teamwork. NA Title Five programming tips: start your coding career Abstract Title MentorNet Abstract Title Punch cards vs Java Abstract Title Digital evolution with avida Abstract CCS General and reference Document types General conference proceedings Title Defining the future of multi-gigabit mmWave wireless communications Abstract The widespread availability and use of digital multimedia content has created a need for multi-gigabit wireless connectivity that current commercial standards cannot support. This has driven demand for a single standard that can support advanced applications such as wireless display and docking, as well as more established usages such as network access. In this talk, we introduce the Wireless Gigabit (WiGig) Alliance, which was formed to meet this need by establishing a unified specification for wireless communication at multi-gigabit speeds. The WiGig Alliance has produced a specification designed to drive a global ecosystem of interoperable products, defining PHY, MAC, and protocol adaptation layers for wireless communication in the 60 GHz frequency band. Title Support of high-performance i/o protocols over mmWave networks Abstract In this paper, we present a cross-layer framework for support of high-performance I/O protocols over mmWave networks. We define an architecture that supports the unique requirements imposed by I/O protocols, in which a protocol adaptation layer is introduced between the mmWave radio and the I/O protocol stack. 
The adaptation layer on the control plane manages the wireless I/O connection; on the data plane, it translates the short I/O data frames into a custom mmWave data exchange protocol that efficiently utilizes the wireless resources. The proposed architecture hides the wireless medium from the I/O protocol such that the existing I/O stack remains unchanged and can be reused in devices equipped with mmWave radio. Title Nordic Symposium on Cloud Computing and Internet Technologies (NordiCloud) Abstract This is an introduction to the NordiCloud Symposium collocated with WICSA/ECSA 2012. Title Delivering ICT infrastructure for biomedical research Abstract This paper describes an implementation of the Infrastructure-as-a-Service (IaaS) concept for scientific computing and seven service pilot implementations with requirements from biomedical use cases at the CSC - IT Center for Science. The key service design requirements were enabling the use of any scientific software environment the use cases needed to succeed, and delivering the distributed infrastructure ICT resources seamlessly with the local ICT resources for the scientist users. The service concept targets the IT administrators at research organisations and delivers virtualised compute cluster and storage capacity via private network solutions. The virtualised resources can become part of the local cluster as virtual nodes and they can share the same file system as the physical nodes assuming the network performance is sufficient. Extension of the local resources can then be made transparent to enable an easy infrastructure uptake to the scientist end-users. Based on 20 months of service piloting most of the biomedical organisations express a sustained and growing need for the distributed compute and storage resources delivered with the IaaS. We conclude that a successful implementation of the IaaS can improve access and reduce the effort to run expensive ICT infrastructure needed for biomedical research. Title Ultrascan solution modeler: integrated hydrodynamic parameter and small angle scattering computation and fitting tools Abstract UltraScan Solution Modeler (US-SOMO) processes atomic and lower-resolution bead model representations of biological and other macromolecules to compute various hydrodynamic parameters, such as the sedimentation and diffusion coefficients, relaxation times and intrinsic viscosity, and small angle scattering curves, that contribute to our understanding of molecular structure in solution. Knowledge of biological macromolecules' structure aids researchers in understanding their function as a path to disease prevention and therapeutics for conditions such as cancer, thrombosis, Alzheimer's disease and others. US-SOMO provides a convergence of experimental, computational, and modeling techniques, in which detailed molecular structure and properties are determined from data obtained in a range of experimental techniques that, by themselves, give incomplete information. Our goal in this work is to develop the infrastructure and user interfaces that will enable a wide range of scientists to carry out complicated experimental data analysis techniques on XSEDE. Our user community predominantly consists of biophysics and structural biology researchers. A recent search on PubMed reports 9,205 papers in the decade referencing the techniques we support. We believe our software will provide these researchers a convenient and unique framework to refine structures, thus advancing their research. 
The computed hydrodynamic parameters and scattering curves are screened against experimental data, effectively pruning potential structures into equivalence classes. Experimental methods may include analytical ultracentrifugation, dynamic light scattering, small angle X-ray and neutron scattering, NMR, fluorescence spectroscopy, and others. One source of macromolecular models is X-ray crystallography. However, the conformation in solution may not match that observed in the crystal form. Using computational techniques, an initial fixed model can be expanded into a search space utilizing high temperature molecular dynamic approaches or stochastic methods such as Brownian dynamics. The number of structures produced can vary greatly, ranging from hundreds to tens of thousands or more. This introduces a number of cyberinfrastructure challenges. Computing hydrodynamic parameters and small angle scattering curves can be computationally intensive for each structure, and therefore cluster compute resources are essential for timely results. Input and output data sizes can vary greatly from less than 1 MB to 2 GB or more. Although the parallelization is trivial, along with data size variability there is a large range of compute sizes, ranging from one to potentially thousands of cores with compute time of minutes to hours. In addition to the distributed computing infrastructure challenges, an important concern was how to allow a user to conveniently submit, monitor and retrieve results from within the C++/Qt GUI application while maintaining a method for authentication, approval and registered publication usage throttling. Middleware supporting these design goals has been integrated into the application with assistance from the Open Gateway Computing Environments (OGCE) collaboration team. The approach was tested on various XSEDE clusters and local compute resources. This paper reviews current US-SOMO functionality and implementation with a focus on the newly deployed cluster integration. Title Travel plans: opportunities for ICT Abstract Site-based mobility management or 'travel plans' address the transport problem by engaging with those organisations such as employers that are directly responsible for generating the demand for travel, and hence have the potential to have a major impact on transport policy. To do this effectively however, travel plans need to be reoriented to be made more relevant to the needs of these organisations, whilst the policy framework in which they operate needs modifying to better support their diffusion and enhance their effectiveness. One key barrier, is a lack of available tools for these reoriented travel plans to apply. This paper therefore seeks to help identify potential market niches where ICT developers could help address this issue. Specifically, a framework is presented and suggestions offered as to which particular areas may benefit most from ICT interventions. Title Once you click 'done': Investigating the relationship between disengagement, exhaustion and turnover intentions among university IT professionals Abstract Recent studies have shown that turnover is a major issue in IT environments (Armstrong & Riemenschneider, 2011; Carayon, Schoepke, Hoonakker, Haims, & Brunette, 2006; Moore, 2000a; Rigas, 2009). In fact, the research literature in IT and the popular press suggest that IT professionals are particularly vulnerable to burnout (Armstrong & Riemenschneider, 2011; Kalimo & Toppinen, 1995; McGee, 1996; Moore, 2000a). 
Using the Job Demands-Resources Model of Burnout as a framework, this study investigates the relationship between disengagement, work exhaustion and turnover intentions among IT professionals in a single university in a major metropolitan area. This study employed a non-experimental, cross-sectional survey research design using a Web-based survey questionnaire to collect data from a population (N=287) of university IT employees in a major metropolitan area. Two instruments were employed in the study: the Oldenburg Burnout Inventory (OLBI) measures work exhaustion and disengagement as developed by Demerouti et al. (2001); the Michigan Organizational Assessment Questionnaire Job Satisfaction Subscale (MOAQ-JSS) measures turnover intentions. The findings from this research indicated that disengagement consistently showed a statistically significant, positive correlation with turnover intentions. The most important conceptual implication of the study is that future investigations of disengagement, work exhaustion and turnover intentions among university IT employees must account for the unique work environment and how those workplace characteristics predict disengagement, work exhaustion and subsequent thoughts about quitting. Title The ACM PODS Alberto O. Mendelzon test-of-time award 2012 Abstract Title Evaluating the impact of incorporating information from social media streams in disaster relief routing Abstract In this paper, we describe a model that can be used to evaluate the impact of using imperfect information when routing supplies for disaster relief. Using two objectives, maximizing the population supported and minimizing response time, we explore the potential tradeoffs (e.g. more information, but possibly less accurate) of using information from social media streams to inform routing and resource allocation decisions immediately after a disaster. Title Encounters: from talking heads to swarming heads Abstract Robots at home and work have been a key theme in science fiction since the genre began. It is only now that we see this come into realization, albeit in very basic forms such as robot vacuum cleaners and various entertainment robotic platforms. In this video we highlight a number of projects woven around the iRobot Create research robot platform and an embodied conversational agent called the Prosthetic Head - an installation work by Stelarc. We start the visual journey by taking a satirical look at some of the parallels between a commercial communication product and the Prosthetic Head. The journey then moves through telepresence robotics and gesture-based human-robot interaction. The robots featured in the video are driven by an attention and behavioral system. Finally, the video concludes with a preview of the "Swarming Heads" - an interactive installation. CCS General and reference Document types Biographies Title mmWave communications: what is the killer application and how to make it happen? Abstract Title Hypertext as an expression of the rhizomatic self Abstract Developments in the philosophical and social science literature around narrative and identity are seeing the emergence of an understanding of the Self as rhizomatic. Rhizomatics in narrative form can be conceptualized as hypertext. 
In this position paper, we aim, from a social work perspective, to lay out some of the strengths of conceptualizing the self through a rhizomatic hypertextual narrative, helping to resolve the agency/structure problems we find in the literature on the dialogical self by accounting for context, accounting for multiplicity, providing a metaphor for distant and proximal memory, and allowing for shared nodes where individual lines of flight cross. Title In memoriam: Chionh Eng Wee Abstract Title Amir Pnueli and the dawn of hybrid systems Abstract In this talk I present my own perspective on the beginning (I refer mostly to the period 1988-1998) of hybrid systems research on the computer science side, focusing on the contributions of the late Amir Pnueli, mildly annotated with some opinions of mine. Title The work of Leslie Valiant Abstract On Saturday, May 30, one day before the start of the regular STOC 2009 program, a workshop was held in celebration of Leslie Valiant's 60th birthday. Talks were given by Jin-Yi Cai, Stephen Cook, Vitaly Feldman, Mark Jerrum, Michael Kearns, Mike Paterson, Michael Rabin, Rocco Servedio, Paul Valiant, Vijay Vazirani, and Avi Wigderson. The workshop was organized by Michael Kearns, Rocco Servedio, and Salil Vadhan, with support from the STOC local arrangements team and program committee. To accompany the workshop, here we briefly survey Valiant's many fundamental contributions to the theory of computing. Title Athena lecture: Controlling Access to Programs? Abstract Title The good, the bad, and the provable Abstract Title (abstract only) Abstract Title In Memoriam Eugeny Pankratiev: Faculty of Mechanics and Mathematics, Moscow State University, Moscow, Russia Abstract Title Special session in honor of randy pausch Abstract Randy Pausch is an inspiration to all with his research, teaching, the way he has lived his life, and his courage while confronting pancreatic cancer. This session brings together people he has touched through various phases of his career to discuss his research and legacy. CCS General and reference Document types General literature Title Creative ecologies in action: technology and the workshop-as-artwork Abstract A shift is occurring, particularly evident in art-and-technology practice, in which the artist-led workshop is transformed into a distinct and distinguishable artistic form. Resulting from, and contributing to, the new access and relationships people have to information, creative culture, materials and like-interested individuals, the "workshop-as-artwork" is herein proposed and outlined. As a set of multiple artistic (material), social and learning agent interactions, thinking of this new form as an ecology has shown benefits in terms of the aims and design of these new works, as well as their thinking, planning and execution. Further, from the artist-interventionist point of view, positing the workshop-as-artwork and ecological thinking seeks to update notions of legacy, consequence and significance for the art-and-technology practitioner and his or her audience. Particular attention is given to the links made between the workshop-as-artwork and other historical art forms, the potentials for these structures to provide a means of rendering technologies more convivial, as well as understanding the participative and performative interactions possible within such a form. We conclude with a set of reflections on the artistic context of this work, and possible directions and prospects arising from the "workshop-as-artwork". 
Title OrientSTS: spatio-temporal sequence searching in flickr Abstract Nowadays, due to the increasing user requirements of efficient and personalized services, a perfect travel plan is urgently needed. However, at present it is hard for people to make a personalized traveling plan. Most of them follow other people's general travel trajectory. So only after finishing their travel, do they know which scene is their favorite, which is not, and what is the perfect order of visits. In this research we propose a novel spatio-temporal sequence (STS) searching, which mainly includes two steps. Firstly, we propose a novel method to detect tourist features of every scene, and its difference in different seasons. Secondly, combined with personal profile and scene features, a set of interesting scenes will be chosen and each scene has a specific weight for each user. The goal of our research is to provide the traveler with the STS, which passes through as many chosen scenes as possible with the maximum weight and the minimum distance within his travel time. We propose a method based on topic model to detect scene features, and provide two approximate algorithms to mine STS: a local optimization algorithm and a global optimization algorithm. System evaluations have been conducted and the performance results show the efficiency. Title Pithy software engineering quotes Abstract Title Interactive poetry: poets and programmers Abstract This short paper uses a technosocial framework to look at the recent advent of Interactive Poetry online and the resulting experience for the viewer. This new poetry aims to provide an interactive artistic experience whereby the viewers or players explore the poetic environment towards the end goal of constructing their own meaning. Title Pithy software engineering quotes Abstract Title Poetry in code Abstract Title Should English be declared the world's official common language? Abstract Title Once upon a time, like never before: the challenge of telling the next story Abstract Readers turn to narrative for certain familiar pleasures; and yet, reading the opening sentences, they hope to find themselves in unknown territory. They want to be lost in a book, transported through a shared act of imagination. If what they read seems too strange, though, if they start to feel truly lost, they're likely to feel anxious, frustrated, even angry. The challenge for the writer, then--the challenge for every discoverer and creator--is to communicate with the past, while guiding the reader (or follower, or user) someplace new. Using examples from writing and cartography, this talk will explore the challenges of discovery, the challenges of presenting those discoveries, and how the presentation itself is often the key to discovery (think Impressionism). It will also consider the tension between intention and inspiration, or good luck. Columbus was headed for India, James Cook mapped the Pacific only because he couldn't find Terra Australis, and both Mark Twain (soon after publishing The Adventures of Huckleberry Finn) and F. Scott Fitzgerald (in the weeks following the publication of The Great Gatsby) expressed despair over their failure to write the books they thought they meant to write. The thing they had discovered--the thing they had created-transcended their own conception of a "good book. Before we can lead anyone anywhere, we need to look clearly at where we are, and to prepare ourselves to see like never before. 
Title How to improve your writing by standing on your head Abstract Title Newspapers and the new paradigm Abstract CCS General and reference Document types Computing standards, RFCs and guidelines Title Open SVC decoder: a flexible SVC library Abstract This paper describes the Open SVC Decoder project, an open source library which implements the Scalable Video Coding (SVC) standard, the latest standard from the Joint Video Team (JVT). This library has been integrated into the open source players The Core Pocket Media Player (TCPMP) and mplayer, in order to be deployed over different platforms with different operating systems. Title How to contribute research results to internet standardization Abstract The development of new technology is driven by scientific research. The Internet, with its roots in the ARPANET and NSFNet, is no exception. Many of the fundamental, long-term improvements to the architecture, security, end-to-end protocols and management of the Internet originate in the related academic research communities. Even shorter-term, more commercially driven extensions are oftentimes derived from academic research. When interoperability is required, the IETF standardizes such new technology. Timely and relevant standardization benefits from continuous input and review from the academic research community. For an individual researcher, it can however be quite puzzling how to begin to most effectively participate in the IETF and - arguably to a much lesser degree - in the IRTF. The interactions in the IETF are much different from those in academic conferences, and effective participation follows different rules. The goal of this document is to highlight such differences and provide a rough guideline that will hopefully enable researchers new to the IETF to become successful contributors more quickly. Title Underneath the hood: ownership vs. stewardship of the internet Abstract I recently published this essay on CircleID on my thoughts on ICANN's recent decision to launch .XXX and the larger new gTLD program this year. Among other observations, I describe how .XXX marks a historical inflection point, where ICANN's board formally abandoned any responsibility to present an understanding of the ramifications of probable negative externalities ("harms") in setting its policies. That ICANN chose to relinquish this responsibility puts the U.S. government in the awkward position of trying to tighten the few inadequate controls that remain over ICANN, and leaves individual and responsible corporate citizens in the unenviable yet familiar position of bracing for the consequences. Title Differential piracy Abstract In all seriousness: Differential Privacy. With less seriousness, I would like to talk about Differential Piracy. So, there has been a lot of work recently on Piracy Preserving Queries and Differential Piracy. These two related technologies exploit new ideas in statistical security. Rather than security through obscurity, the idea is to offer privacy through lack of differentiation (no, not inability to perform basic calculus, more the inability to distinguish between large numbers of very similar things). NA Title Sheer curation for experimental data and provenance Abstract Title Data determination, disambiguation, and referencing in molecular biology Abstract Entity and instance determination, disambiguation, and referencing, referred to as authority control in libraries, are essential for scientific research. 
This study examines the authority control practices and issues in molecular biology using literature and scenario analyses. The analyses imply that the concept of authority control in molecular biology is associated with three tasks: named entity recognition, disambiguation, and unification. The identified authority control issues were conceptualized as quality problems caused by four sources: inconsistent or incomplete mapping, context changes, entity changes, and changes in entity metadata. This study can inform librarians and repository curators of the needs and issues of authority control in molecular biology and other related disciplines. Title A Measurement Framework for Evaluating Emulators for Digital Preservation Abstract Accessible emulation is often the method of choice for maintaining digital objects, specifically complex ones such as applications, business processes, or electronic art. However, validating the emulator’s ability to faithfully reproduce the original behavior of digital objects is complicated. This article presents an evaluation framework and a set of tests that allow assessment of the degree to which system emulation preserves original characteristics and thus significant properties of digital artifacts. The original system, hardware, and software properties are described. An identical environment is then recreated via emulation. Automated user input is used to eliminate potential confounders. The properties of a rendered form of the object are then extracted automatically or manually either in a target state, a series of states, or as a continuous stream. The concepts described in this article enable preservation planners to evaluate how emulation affects the behavior of digital objects compared to their behavior in the original environment. We also review how these principles can and should be applied to the evaluation of migration and other preservation strategies as a general principle of evaluating the invocation and faithful rendering of digital objects and systems. The article concludes with design requirements for emulators developed for digital preservation tasks. Title Mobile web applications: bringing mobile apps and web together Abstract The popularity of mobile applications is very high and still growing rapidly. These applications allow their users to stay connected with a large number of service providers in seamless fashion, both for leisure and productivity. But service providers suffer from the high fragmentation of mobile development platforms that forces them to develop, maintain and deploy their applications in a large number of versions and formats. The Mobile Web Applications (MobiWebApp [1]) EU project aims to build on Europe's strength in mobile technologies to enable European research and industry to strengthen its position in Web technologies and to be active and visible on the mobile applications market. Title The great IPv4 land grab: resource certification for the IPv4 grey market Abstract The era of free IPv4 address allocations has ended and the grey market in IPv4 addresses is now emerging. This paper argues that one cannot and should not try to regulate who sells addresses and at what price, but one does need to provide some proof of ownership in the form of resource certification. 
In this paper we identify key requirements of resource certification, gained from both theoretical analysis and operational history. We further argue these requirements can be achieved by making use of the existing reverse DNS hierarchy, enhanced with DNS Security. Our analysis compares reverse DNS entries and BGP routing tables and shows this is both feasible and achievable today; an essential requirement as the grey market is also emerging today and solutions are needed now, not years in the future. Title Language choice for safety critical applications Abstract The programming languages currently most popular among software engineers for writing safety critical applications are C and, more recently, C++. The Ada language has been designed with software safety in mind. Although Ada is not perfect concerning safety critical programming, it is far better than C or C++. There have been definitions of subsets of C for safety critical applications, such as MISRA C. Similarly, there are several attempts at defining a safe subset of C++, including MISRA C++ and the Joint Strike Fighter (JSF) Avionics C++ coding standards. The most commonly used safety critical subset of Ada is SPARK. SPARK provides a statically provable, fully deterministic subset of Ada. The C and C++ safety critical subsets attempt to achieve a level of safety similar to the full Ada language. That attempt generally fails. This paper concentrates on comparing the C++ language, including portions of the JSF C++ standard and those features inherited from C, with the full Ada language. CCS General and reference Cross-computing tools and techniques Reliability Title An approach to improving the structure of error-handling code in the linux kernel Abstract The C language does not provide any abstractions for exception handling or other forms of error handling, leaving programmers to devise their own conventions for detecting and handling errors. The Linux coding style guidelines suggest placing error handling code at the end of each function, where it can be reached by gotos whenever an error is detected. This coding style has the advantage of putting all of the error-handling code in one place, which eases understanding and maintenance, and reduces code duplication. Nevertheless, this coding style is not always applied. In this paper, we propose an automatic program transformation that transforms error-handling code into this style. We have applied our transformation to the Linux 2.6.34 kernel source code, on which it reorganizes the error handling code of over 1800 functions, in about 25 minutes. Title ACM SRC poster: gem: a formal dynamic environment for HPC pedagogy Abstract Computing is undergoing a dramatic shift from sequential to parallel processing. With this shift comes new challenges: how to debug code with multiple processes and threads, and how to effectively teach these programming concepts. Traditional testing tools are ineffective and inefficient when it comes to detecting deep-seated logical bugs in parallel code, and often lack a GUI in popular IDEs. No support exists for teaching actual courses based on these tools either. In previous work, my research group provided an MPI testing tool called ISP and integrated it into Eclipse's PTP via the GEM plug-in. I expand on these with enhanced graphical interactions, interception of threaded behavior, and a tool for HPC pedagogy. 
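The goto-based convention described in the Linux kernel error-handling entry above can be made concrete with a minimal C sketch. The function, file name, and error codes here are hypothetical, chosen only to show the pattern of jumping to cleanup labels at the end of the function and releasing, in reverse order, only what has already been acquired; this is not code from the kernel or from the paper's transformation tool.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical illustration of the Linux "goto cleanup" error-handling
     * style: every failure jumps to a label near the end of the function,
     * and the labels undo, in reverse order, only what was already set up. */
    static int setup_device(void)
    {
            char *buf = NULL;
            FILE *cfg = NULL;
            int err = 0;

            buf = malloc(4096);
            if (!buf) {
                    err = -ENOMEM;
                    goto out;                 /* nothing acquired yet */
            }

            cfg = fopen("device.cfg", "r");   /* hypothetical config file */
            if (!cfg) {
                    err = -EIO;
                    goto out_free;            /* release only the buffer */
            }

            if (fread(buf, 1, 4096, cfg) == 0) {
                    err = -EINVAL;
                    goto out_close;           /* release the file, then the buffer */
            }

            /* ... use buf ... */

    out_close:
            fclose(cfg);
    out_free:
            free(buf);
    out:
            return err;
    }

    int main(void)
    {
            return setup_device() ? EXIT_FAILURE : EXIT_SUCCESS;
    }

With this layout, every failure path shares the same cleanup code at the bottom of the function, which is the shape the paper's automatic transformation aims to produce for functions that do not yet follow the convention.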
Title Dynamically scaling applications in the cloud Abstract Scalability is said to be one of the major advantages brought by the cloud paradigm and, more specifically, the one that makes it different from an "advanced outsourcing" solution. However, there are some important pending issues before making the dream of automated scaling for applications come true. In this paper, the most notable initiatives towards whole application scalability in cloud environments are presented. We present relevant efforts at the edge of state-of-the-art technology, providing an encompassing overview of the trends they each follow. We also highlight pending challenges that will likely be addressed in new research efforts and present an ideal scalable cloud system. Title Experience report: a do-it-yourself high-assurance compiler Abstract Embedded domain-specific languages (EDSLs) are an approach for quickly building new languages while maintaining the advantages of a rich metalanguage. We argue in this experience report that the "EDSL approach" can surprisingly ease the task of building a high-assurance compiler. We do not strive to build a fully formally-verified tool-chain, but take a "do-it-yourself" approach to increase our confidence in compiler correctness without too much effort. Title Casting doubts on the viability of WiFi offloading Abstract With the advent of the smartphone, mobile data usage has exploded, which in turn has created tremendous pressure on cellular data networks. A promising candidate to reduce the impact of cellular data growth is WiFi offloading. However, recent data from our study of two hundred student smartphone users casts doubt on the reductions that can be gained from WiFi offloading. Despite the users operating in a dense university WiFi environment, cellular consumption still dominated overall data usage. We believe the root cause of lower WiFi utilization can be traced to WiFi being optimized for laptop WiFi reception rather than the more constrained smartphone WiFi reception. Our work examines the relationship of WiFi versus 3G usage through a variety of aspects including active phone usage, application types, and traffic volume over an eight-week period from the Spring of 2012. Title Protecting web applications from SQL injection attacks by using framework and database firewall Abstract SQL Injection attacks are costly and critical attacks on web applications: SQL Injection is a code injection technique that allows attackers to obtain unrestricted access to the databases and potentially sensitive information like usernames, passwords, email IDs, and credit card details present in them. Various techniques have been proposed to address the problem of SQL Injection attacks, such as defensive coding practices, detection and prevention techniques, and intrusion detection systems. However, most of these techniques have one or more disadvantages, such as requiring code modification or being applicable only to limited types of attacks and web applications. In this paper, we discuss a secure mechanism for protecting web applications from SQL Injection attacks by using a framework and a database firewall. This mechanism uses a combined static and dynamic analysis technique. In static analysis, we list URLs, forms, injection points, and vulnerable parameters of the web application. Thus, we identify valid queries that could be generated by the application. In dynamic analysis, we use the database firewall to monitor runtime-generated queries and check them against the whitelist of queries. 
The experimental setup makes use of real web applications and two open source tools, namely the Web Application Attack and Audit Framework (w3af) and GreenSQL. We used w3af for listing all the valid queries and GreenSQL as the database firewall. The results show that the implemented mechanism is capable of detecting all types of SQL Injection attacks without requiring any code modification to the existing web application, but with the additional element of deploying a proxy. Title Procedure hopping: a low overhead solution to mitigate variability in shared-L1 processor clusters Abstract Variation in performance and power across manufactured parts and their operating conditions is a well-known issue in advanced CMOS processes. This paper proposes a resilient HW/SW architecture for shared-L1 processor clusters to combat both static and dynamic variations. We first introduce the notion of procedure-level vulnerability ( Title Fan-speed-aware scheduling of data intensive jobs Abstract As server processor power densities increase, the cost of air cooling also grows as a result of higher fan speeds. Our measurements show that vibrations induced by fans in high-end servers and their rack neighbors cause a dramatic drop in hard disk bandwidth, resulting in a corresponding decrease in application performance. In this paper we quantify the performance and energy cost effects of the fan vibrations and propose a disk-performance-aware thermal, energy and cooling technique. Results show that we can not only meet thermal constraints, but also improve performance by 1.35x as compared to conventional methods. Title A game theoretic resource allocation for overall energy minimization in mobile cloud computing system Abstract Cloud computing and virtualization techniques provide mobile devices with battery energy saving opportunities by allowing them to offload computation and execute code remotely. When the cloud infrastructure consists of heterogeneous servers, the mapping between mobile devices and servers plays an important role in determining the energy dissipation on both sides. From an environmental impact perspective, any energy dissipation related to computation should be counted. To achieve energy sustainability, it is important to reduce the overall energy consumption of the mobile systems and the cloud infrastructure. Furthermore, reducing cloud energy consumption can potentially reduce the cost for mobile cloud users because the pricing model of cloud services is pay-by-usage. In this paper, we propose a game-theoretic approach to optimize the overall energy in a mobile cloud computing system. We formulate the energy minimization problem as a congestion game, where each mobile device is a player and its strategy is to select one of the servers to offload the computation while minimizing the overall energy consumption. We prove that the Nash equilibrium always exists in this game and propose an efficient algorithm that could achieve the Nash equilibrium in polynomial time. Experimental results show that our approach is able to reduce the total energy of mobile devices and servers compared to a random approach and an approach which only tries to reduce the energy of the mobile devices alone. Title Reliability analysis in component-based development via probabilistic model checking Abstract Engineering of highly reliable systems requires support of sophisticated design methods allowing software architects to competently decide between various design alternatives already early in the development process. 
Architecture-based reliability prediction provides such capability. The formalisms and analytical methods employed by existing approaches are however often limited to a single reliability measure (the probability of failure on demand) and consideration of behavioural uncertainty (focusing on the uncertainty in model parameters, not the behaviour itself). This paper presents a formal reliability assessment approach for component-based systems based on the probabilistic model checking of various reliability-related properties specified in probabilistic linear temporal logic (PLTL). The systems are formalized as Markov decision processes (MDP), which allows software architects to encode behavioural uncertainties into the models in terms of nondeterministic (scheduler-decided) choices in the MDP. CCS General and reference Cross-computing tools and techniques Empirical studies CCS General and reference Cross-computing tools and techniques Measurement Title Measured impact of crooked traceroute Abstract Data collected using traceroute-based algorithms underpins research into the Internet's router-level topology, though it is possible to infer false links from this data. One source of false inference is the combination of per-flow load-balancing, in which more than one path is active from a given source to destination, and classic traceroute, which varies the UDP destination port number or ICMP checksum of successive probe packets, which can cause per-flow load-balancers to treat successive packets as distinct flows and forward them along different paths. Consequently, successive probe packets can solicit responses from unconnected routers, leading to the inference of false links. This paper examines the inaccuracies induced from such false inferences, both on macroscopic and ISP topology mapping. We collected macroscopic topology data to 365k destinations, with techniques that both do and do not try to capture load balancing phenomena. We then use alias resolution techniques to infer if a measurement artifact of classic traceroute induces a false router-level link. This technique detected that 2.71% and 0.76% of the links in our UDP and ICMP graphs were falsely inferred due to the presence of load-balancing. We conclude that most per-flow load-balancing does not induce false links when macroscopic topology is inferred using classic traceroute. The effect of false links on ISP topology mapping is possibly much worse, because the degrees of a tier-1 ISP's routers derived from classic traceroute were inflated by a median factor of 2.9 as compared to those inferred with Paris traceroute. Title Making the best of two worlds: a framework for hybrid experiments Abstract In this paper we present the design and implementation of a framework for hybrid experiments that integrates a real-world wireless testbed with a wireless network emulation testbed. The real-world component of the framework, that we call "physical realm", can be used for those experiment aspects that are difficult to perform through emulation, such as real-life communication conditions under the effect of perturbations and weather conditions. Correspondingly, the emulated part of the framework, that we call "emulated realm", can be used for those characteristics that are difficult to address in the real world, such as technologies that are not yet available, large-scale mobility, or for reasons of financial cost. The paper includes a series of proof-of-concept experiments that demonstrate the feasibility of the proposed technique. 
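The per-flow load-balancing effect described in the "Measured impact of crooked traceroute" entry above can be illustrated with a small C sketch. The flow hash and the two-path topology below are hypothetical stand-ins (real routers use vendor-specific hash functions), and this is not the paper's measurement code; the point is only that probes which vary the UDP destination port, as classic traceroute does, can be hashed onto different equal-cost paths, while probes that hold the 5-tuple constant, as Paris traceroute does, stay on one path.

    #include <stdio.h>

    /* Toy model of a per-flow load balancer choosing between two
     * equal-cost next hops based on a hash of the 5-tuple. */
    struct flow {
        unsigned src_ip, dst_ip;
        unsigned short src_port, dst_port;
        unsigned char proto;
    };

    /* Hypothetical FNV-style flow hash; real routers differ. */
    static unsigned flow_hash(const struct flow *f)
    {
        unsigned h = 2166136261u;
        h = (h ^ f->src_ip) * 16777619u;
        h = (h ^ f->dst_ip) * 16777619u;
        h = (h ^ f->src_port) * 16777619u;
        h = (h ^ f->dst_port) * 16777619u;
        h = (h ^ f->proto) * 16777619u;
        return h;
    }

    int main(void)
    {
        const char *next_hop[2] = { "router-A", "router-B" };  /* two equal-cost paths */
        struct flow probe = { 0x0a000001u, 0xc0a80001u, 33434, 33434, 17 };
        int ttl;

        puts("classic traceroute (destination port changes per probe):");
        for (ttl = 1; ttl <= 5; ttl++) {
            probe.dst_port = (unsigned short)(33434 + ttl);   /* classic behaviour */
            printf("  ttl %d -> %s\n", ttl, next_hop[flow_hash(&probe) % 2]);
        }

        puts("Paris traceroute (5-tuple held constant):");
        probe.dst_port = 33434;
        for (ttl = 1; ttl <= 5; ttl++)
            printf("  ttl %d -> %s\n", ttl, next_hop[flow_hash(&probe) % 2]);

        return 0;
    }

In the classic case, successive TTLs can map to either next hop, so responses from routers on different paths may be stitched into a single, partly false path; holding the flow identifier constant removes that artifact, which is what the paper's comparison against Paris traceroute exploits.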
Title Performance evaluation of DTN implementations on a large-scale network emulation testbed Abstract In this paper we present a series of experiments that evaluate the performance of two DTN implementations, DTN2 and IBR-DTN, in urban mobility scenarios. The experiments were carried out on the wireless network emulation testbed named QOMB, which was extended to support such DTN evaluations. Our quantitative assessment verified the basic behavior of the DTN implementations, but also identified scalability issues for DTN2 in scenarios with as few as 26 nodes. These results emphasize the need for more extensive large-scale experiments with DTN applications and protocols for comprehensive evaluations in view of functionality validation and performance optimization. This can be readily achieved through the use of emulation testbeds such as the one that we have developed. Title A testbed for measuring battery discharge behavior Abstract We describe a work-in-progress testbed for studying how the energy performance of protocols and applications for small wireless devices is affected by battery discharge behavior. Title Coarse-grained topology estimation via graph sampling Abstract In many online networks, nodes are partitioned into Title ShadowStream: performance evaluation as a capability in production internet live streaming networks Abstract As live streaming networks grow in scale and complexity, they are becoming increasingly difficult to evaluate. Existing evaluation methods including lab/testbed testing, simulation, and theoretical modeling, lack either scale or realism. The industrial practice of gradually-rolling-out in a testing channel is lacking in controllability and protection when experimental algorithms fail, due to its passive approach. In this paper, we design a novel system called ShadowStream that introduces evaluation as a built-in capability in production Internet live streaming networks. ShadowStream introduces a simple, novel, transparent embedding of experimental live streaming algorithms to achieve safe evaluations of the algorithms during large-scale, real production live streaming, despite the possibility of large performance failures of the tested algorithms. ShadowStream also introduces transparent, scalable, distributed experiment orchestration to resolve the mismatch between desired viewer behaviors and actual production viewer behaviors, achieving experimental scenario controllability. We implement ShadowStream based on a major Internet live streaming network, build additional evaluation tools such as deterministic replay, and demonstrate the benefits of ShadowStream through extensive evaluations. Title The role of psychophysics laws in quality of experience assessment: a video streaming case study Abstract The wide range of multimedia services has been attracting more users every day, while the increasing number of users plays an important role in the competitive advantage for network/service providers. Due to the fact that the users are the ones who pay for services, their satisfaction is an important goal for each provider in order to survive in the highly competitive market of multimedia services. Because of the nature of multimedia services, users can instantly detect any quality disturbances. 
For example, users might tolerate several interruptions during a download or upload process, but they may get irritated as soon as they experience a small amount of voice or video disturbance while, for instance, watching their favorite movie over the Internet. Thus, quality of experience (QoE) should be maintained at a steady and acceptable level to satisfy users. To do so, it is necessary to identify the contributing factors in the network that affect multimedia quality. Generally speaking, delay, jitter, loss, and bandwidth have considerable impact on multimedia service quality. Because quality of service (QoS) is among the factors with the greatest impact on QoE, it is necessary to define a quantitative relation between QoE and QoS in order to keep the QoE at an acceptable level. This paper aims to benefit from psychophysics laws to devise quantitative relations which explain the interdependency of QoE and QoS. The proposed quantitative relations are expressed as equations, which are then compared in theoretical and experimental settings to indicate which one can better reflect the relationship between QoE and QoS. A video streaming service affected by packet loss was also chosen as a candidate for our testbed. The testbed results are then fitted by nonlinear regression and validated by goodness-of-fit indices. In this way, we can examine how strongly each equation expresses the dependency between QoE and QoS. Title CATS: cache aware task-stealing based on online profiling in multi-socket multi-core architectures Abstract Multi-socket multi-core architectures with shared caches in each socket have become mainstream now that a single multi-core chip cannot provide enough computing capacity for high performance computing. However, traditional task-stealing schedulers tend to pollute the shared cache and incur severe cache misses due to their randomness in stealing. To address the problem, this paper proposes a Cache Aware Task-Stealing (CATS) scheduler, which uses the shared cache efficiently with an online profiling method and schedules tasks with shared data to the same socket. CATS adopts an online DAG partitioner based on the profiling information to ensure that tasks with shared data can efficiently utilize the shared cache. One outstanding novelty of CATS is that it does not require any extra user-provided information. Experimental results show that CATS can improve the performance of memory-bound programs by up to 74.4% compared with the traditional task-stealing scheduler. Title Interference-driven resource management for GPU-based heterogeneous clusters Abstract GPU-based clusters are increasingly being deployed in HPC environments to accelerate a variety of scientific applications. Despite their growing popularity, the GPU devices themselves are under-utilized even for many computationally intensive jobs. This stems from the fact that the typical GPU usage model is one in which a host processor periodically offloads computationally intensive portions of an application to the coprocessor. Since some portions of code cannot be offloaded to the GPU (for example, code performing network communication in MPI applications), this usage model results in periods of time when the GPU is idle. GPUs could be time-shared across jobs to "fill" these idle periods, but unlike CPU resources such as the cache, the effects of sharing the GPU are not well understood. Specifically, two jobs that time-share a single GPU will experience resource contention and interfere with each other.
The resulting slow-down could lead to missed job deadlines. Current cluster managers do not support GPU-sharing, but instead dedicate GPUs to a job for the job's lifetime. In this paper, we present a framework to predict and handle interference when two or more jobs time-share GPUs in HPC clusters. Our framework consists of an analysis model, and a dynamic interference detection and response mechanism to detect excessive interference and restart the interfering jobs on different nodes. We implement our framework in Torque, an open-source cluster manager, and using real workloads on an HPC cluster, show that interference-aware two-job colocation (although our method is applicable to colocating more than two jobs) improves GPU utilization by 25%, reduces a job's waiting time in the queue by 39% and improves job latencies by around 20%. Title Secure lazy provisioning of virtual desktops to a portable storage device Abstract It is the software and data stored on a 'personal computer' that makes it personal. These contents can be conveniently stored as a disk image on a server and made available on the users' personal storage as and when required through CCS General and reference Cross-computing tools and techniques Metrics Title Measured impact of crooked traceroute Abstract Data collected using traceroute-based algorithms underpins research into the Internet's router-level topology, though it is possible to infer false links from this data. One source of false inference is the combination of per-flow load-balancing, in which more than one path is active from a given source to destination, and classic traceroute, which varies the UDP destination port number or ICMP checksum of successive probe packets, which can cause per-flow load-balancers to treat successive packets as distinct flows and forward them along different paths. Consequently, successive probe packets can solicit responses from unconnected routers, leading to the inference of false links. This paper examines the inaccuracies induced from such false inferences, both on macroscopic and ISP topology mapping. We collected macroscopic topology data to 365k destinations, with techniques that both do and do not try to capture load balancing phenomena. We then use alias resolution techniques to infer if a measurement artifact of classic traceroute induces a false router-level link. This technique detected that 2.71% and 0.76% of the links in our UDP and ICMP graphs were falsely inferred due to the presence of load-balancing. We conclude that most per-flow load-balancing does not induce false links when macroscopic topology is inferred using classic traceroute. The effect of false links on ISP topology mapping is possibly much worse, because the degrees of a tier-1 ISP's routers derived from classic traceroute were inflated by a median factor of 2.9 as compared to those inferred with Paris traceroute. Title Semantic mining on customer survey Abstract Business intelligence aims to support better business decision-making. Customer survey is priceless asset for intelligent business decision-making. However, business analysts usually have to read hundreds of textual comments and tabular data in survey to manually dig out the necessary information to feed business intelligence models and tools. This paper introduces a business intelligence system to solve this problem by extensively utilizing Semantic Web technologies. Ontology based knowledge extraction is the key to extract interesting terms and understand the logic concept of them. 
All knowledge extracted forms a semantic knowledge base. Flexible user queries and intelligent analysis can be easily issued to the system over the semantic data store through a standard protocol. Besides resolving problems in theory, we designed a flexible, intuitive user interaction interface to explain and present the analysis results for business analysts. Through the real usage of this system, we validated that our system provides a good solution for semantic mining of customer surveys for business intelligence. Title Predicting software complexity by means of evolutionary testing Abstract One characteristic that impedes software from achieving good levels of maintainability is the increasing complexity of software. Empirical observations have shown that, typically, the more complex the software is, the bigger the test suite is. Hence, a relevant question, which became the main research topic of our work, has arisen: "Is there a way to correlate the complexity of the test cases utilized to test a software product with the complexity of the software under test?". This work presents a new approach to infer software complexity based on the characteristics of automatically generated test cases. From these characteristics, we expect to create a test case profile for a software product, which will then be correlated to the complexity, as well as to other characteristics, of the software under test. This research is expected to provide developers and software architects with means to support and validate their decisions, as well as to observe the evolution of a software product during its life-cycle. Our work focuses on object-oriented software, and the corresponding test suites will be automatically generated through an emergent approach for creating test data named Evolutionary Testing. Title Runtime monitoring of software energy hotspots Abstract GreenIT has emerged as a discipline concerned with the optimization of software solutions with regard to their energy consumption. In this domain, most of the state-of-the-art solutions concentrate on coarse-grained approaches to monitor the energy consumption of a device or a process. However, none of the existing solutions addresses in-process energy monitoring to provide an in-depth analysis of a process's energy consumption. In this paper, we therefore report on a fine-grained runtime energy monitoring framework we developed to help developers diagnose energy hotspots with better accuracy than the state of the art. Concretely, our approach adopts a 2-layer architecture including OS-level and process-level energy monitoring. OS-level energy monitoring estimates the energy consumption of processes according to different hardware devices (CPU, network card). Process-level energy monitoring focuses on Java-based applications and builds on OS-level energy monitoring to provide an estimation of energy consumption at the granularity of classes and methods. We argue that this per-method analysis of energy consumption provides better insight into the application, helping to identify potential energy hotspots. In particular, our preliminary validation demonstrates that we can monitor energy hotspots of Jetty web servers and monitor their variations under stress scenarios. Title Structured merge with auto-tuning: balancing precision and performance Abstract Software-merging techniques face the challenge of finding a balance between precision and performance.
In practice, developers use unstructured-merge (i.e., line-based) tools, which are fast but imprecise. In academia, many approaches incorporate information on the structure of the artifacts being merged. While this increases precision in conflict detection and resolution, it can induce severe performance penalties. Striving for a proper balance between precision and performance, we propose a structured-merge approach with auto-tuning. In a nutshell, we tune the merge process on-line by switching between unstructured and structured merge, depending on the presence of conflicts. We implemented a corresponding merge tool for Java, called JDime. Our experiments with 8 real-world Java projects, involving 72 merge scenarios with over 17 million lines of code, demonstrate that our approach indeed hits a sweet spot: while largely maintaining a precision superior to that of unstructured merge, structured merge with auto-tuning is up to 12 times faster than purely structured merge, 5 times on average. Title Maintainability prediction of object-oriented software system by multilayer perceptron model Abstract To achieve software quality, correct estimation of maintainability is essential. However, there is a complex and non-linear relationship between object-oriented metrics and maintainability. Thus, the maintainability of object-oriented software can be predicted by applying sophisticated modeling techniques such as artificial neural networks. The Multilayer Perceptron neural network is chosen for the present study because of its robustness and adaptability. This paper presents the prediction of maintainability by using a Multilayer Perceptron (MLP) model and compares the results of this investigation with other models described earlier. It is found that the efficacy of the MLP model is much better than that of both the Ward and GRNN network models. Title Dynamic programming of the towers Abstract NA Title A testbed for measuring battery discharge behavior Abstract We describe a work-in-progress testbed for studying how the energy performance of protocols and applications for small wireless devices is affected by battery discharge behavior. Title Performance evaluation of DTN implementations on a large-scale network emulation testbed Abstract In this paper we present a series of experiments that evaluate the performance of two DTN implementations, DTN2 and IBR-DTN, in urban mobility scenarios. The experiments were carried out on the wireless network emulation testbed named QOMB, which was extended to support such DTN evaluations. Our quantitative assessment verified the basic behavior of the DTN implementations, but also identified scalability issues for DTN2 in scenarios with as few as 26 nodes. These results emphasize the need for more extensive large-scale experiments with DTN applications and protocols for comprehensive evaluations in view of functionality validation and performance optimization. This can be readily achieved through the use of emulation testbeds such as the one that we have developed. Title Making the best of two worlds: a framework for hybrid experiments Abstract In this paper we present the design and implementation of a framework for hybrid experiments that integrates a real-world wireless testbed with a wireless network emulation testbed.
The real-world component of the framework, that we call "physical realm", can be used for those experiment aspects that are difficult to perform through emulation, such as real-life communication conditions under the effect of perturbations and weather conditions. Correspondingly, the emulated part of the framework, that we call "emulated realm", can be used for those characteristics that are difficult to address in the real world, such as technologies that are not yet available, large-scale mobility, or for reasons of financial cost. The paper includes a series of proof-of-concept experiments that demonstrate the feasibility of the proposed technique. CCS General and reference Cross-computing tools and techniques Evaluation Title Gordon: design, performance, and experiences deploying and supporting a data intensive supercomputer Abstract The Title Efficient update data generation for DBMS benchmarks Abstract Industry-standard benchmarks have undoubtedly proven crucial to the innovation and productivity of the computing industry. They are important to the fair and standardized assessment of performance across different vendors, across different system versions from the same vendor, and across different architectures. Good benchmarks are even meant to drive industry and technology forward. At some point, however, after all reasonable advances have been made using a particular benchmark, even good benchmarks become obsolete. This is why standards consortia periodically overhaul their existing benchmarks or develop new benchmarks. An extremely time- and resource-consuming task in the creation of new benchmarks is the development of benchmark generators, especially because benchmarks tend to become more and more complex. The first version of the Parallel Data Generation Framework (PDGF), a generic data generator, was capable of generating data for the initial load of arbitrary relational schemas. It was, however, not able to generate data for the actual workload, i.e., input data for transactions (insert, delete, and update), incremental loads, etc., mainly because it did not understand the notion of updates. Updates are data changes that occur over time, e.g., a customer changes address, switches jobs, gets married, or has children. Many benchmarks need to reflect these changes during their workloads. In this paper we present PDGF Version 2, which contains extensions enabling the generation of update data. Title A unified approach to fully lazy sharing Abstract We give an axiomatic presentation of sharing-via-labelling for weak lambda-calculi that makes it possible to formally compare many different approaches to fully lazy sharing, and to obtain two important results. We prove that the known implementations of full laziness are all equivalent in terms of the number of beta-reductions performed, although they behave differently regarding the duplication of terms. We establish a link between the optimality theories of weak lambda-calculi and first-order rewriting systems by expressing fully lazy lambda-lifting in our framework, thus emphasizing the first-order essence of weak reduction. Title The NWSC benchmark suite using scientific throughput to measure supercomputer performance Abstract The NCAR-Wyoming Supercomputing Center (NWSC) will begin operating in June 2012, and will house NCAR's next generation HPC system.
The NWSC will support a broad spectrum of Earth Science research drawn from a user community with diverse requirements for computing, storage, and data analysis resources. To ensure that the NWSC satisfies the needs of this community, the procurement benchmarking process was driven by science requirements from the start. We will discuss the science objectives for NWSC, translating scientific goals into technical requirements for a machine, and assembling a benchmark suite from community science models and synthetic tests to measure the technical capabilities of the proposed HPC systems. We will also talk about the benchmark analysis process, extending the benchmark suite as a testing tool over the life of the machine, and the applicability of the NWSC benchmarking suite to other HPC centers. Title The mission: teaming with microsoft and adobe for licensing Abstract If you are involved in software licensing and distribution, you will almost always deal with two top-tier publishers: Microsoft and Adobe. Students and employees at your school will want to use leading-edge software from these companies to accomplish their work. It is your task to find the best opportunities for cost savings and delivery methods for the greatest use of these products. Available licensing programs from these publishers can be difficult to understand because of contractual commitments and misunderstood complexities. Licensing criteria for purchasing and distribution differ for each of these publishers. Do you need product support? How do Microsoft and Adobe offerings compare with open-source products? Are the same products adequate for institutional and student use? What about virtualization and cloud licensing? This calls for teamwork involving you, the publishers, resellers, your institution, and end-users. Let us explore what you need to license and distribute Microsoft and Adobe products effectively while containing costs, establishing product-use standards, and providing long-term organizational benefits. Title Charting a course for software licensing and distribution Abstract Software licensing and distribution offers many opportunities for cost savings while delivering a greater ability for students and employees to use cost-effective, leading-edge technology to accomplish their tasks. Using available licensing programs from major publishers can be difficult because of the contractual commitments and often misunderstood complexities of those programs. Each publisher has its own licensing criteria for obtaining products. Product support must be considered when acquiring any title for campus-wide use. Educational institutions will often embrace "open source" software to overcome these obstacles. While that is a possible solution, commercial software products are often a better choice. West Virginia University embarked upon a centralized approach for widely used commercial software in 2002. The program has evolved to provide products from several publishers. The campus-wide distribution of software uses a combination of cost recovery and centralized funding. Students and employees can purchase products at significant savings for personal use. Institutional users can easily obtain the tools to prepare their course material and share those same tools with their students.
Software licensing involves several entities within the institution, such as the legal department, procurement, network administration, customer support, academic affairs, student affairs, information security, and information technology. With that in mind, let's explore how you can develop your software licensing and distribution program so that it becomes a key component that drives cost containment, integrates product-use standards, and provides long-term organizational benefits. Title Comparing a video projector and an inter-PC screen broadcasting system in a computer laboratory Abstract We describe an experiment comparing presentation tools for a computer laboratory using a usability test. We compared the cognitive effects of two presentation tools, a video projector and an inter-PC screen broadcasting system, in a computer laboratory, and obtained quantitative results on the tools' cognitive effects on users. The experimental measurements showed that the projector was better when there was a small amount of data on one screen, while the screen broadcasting system was better when there was a large amount of data on one screen. Title Making SOA work in a healthcare company Abstract Making SOA work in a large and diverse healthcare company is not just about bridging the gap between business and IT. It is also about bridging the gap between the technologies of yesterday, today, and tomorrow. As Health Net has grown by acquiring other entities, we have acquired a landscape of diverse assets written in many languages, hosted on many platforms. These range from Java on WebLogic to .Net to RPG on iSeries to CICS on zSeries to COBOL on OpenVMS. Integrating these systems goes beyond simple business services. Successful integration ultimately requires elevating IT teams to the vision of a SOA enterprise as defined by an enterprise reference architecture. Educating our IT project teams in the fundamentals of SOA design and development has involved special approaches and a commitment to mentoring and continuous education in the enterprise. This discussion covers some of the challenges, successes, and lessons learned that we have encountered in bringing SOA to Health Net. Title Expressing advanced user preferences in component installation Abstract State of the art component-based We present an architecture that allows users to express advanced preferences about package selection in FOSS distributions. The architecture is composed of a distribution-independent format for describing available and installed packages called CUDF (Common Upgradeability Description Format), and a foundational language called MooML to specify optimization criteria. We present the syntax and semantics of CUDF and MooML, and discuss the partial evaluation mechanism of MooML, which improves the efficiency of package dependency solvers. Title True value: assessing and optimizing the cost of computing at the data center level Abstract There are five main components to the cost of delivering computing in a data center: (i) the construction of the data center building itself; (ii) the power and cooling infrastructure for the data center; (iii) the acquisition cost of the servers that populate the data center; (iv) the cost of electricity to power (and cool) the servers; and (v) the cost of managing those servers. We first study the fundamental economics of operating such a data center with a model that captures the first four costs. We call these the physical cost, as it does not include the labor cost.
We show that it makes economic sense to design data centers for relatively low power densities, and that increasing server utilization is an efficient way to reduce the total cost of computation. We then develop a cost/performance model that includes the management cost and allows the evaluation of the optimal server size for consolidation. We show that, for a broad range of operating and cost conditions, servers with 4 to 16 processor sockets result in the lowest total cost of computing. CCS General and reference Cross-computing tools and techniques Experimentation CCS General and reference Cross-computing tools and techniques Estimation CCS General and reference Cross-computing tools and techniques Design Title Flashboost: design of flash memory buffer cache mechanism for video-on-demand Abstract A magnetic disk is a serious bottleneck that limits the scalability of a video server due to its head seek overhead. For a video server, Title Optimal WCET-aware code selection for scratchpad memory Abstract We propose the first polynomial-time code selection algorithm for minimising the worst-case execution time of a non-nested loop executed on a fully pipelined processor that uses scratchpad memory to replace the instruction cache. The time complexity of our algorithm is Title Quantitative system validation in model driven design Abstract The European STREP project Title Functionality-rich versus minimalist platforms: a two-sided market analysis Abstract Should a new "platform" target a functionality-rich but complex and expensive design, or instead opt for a bare-bones but cheaper one? This is a fundamental question with profound implications for the eventual success of any platform. A general answer is, however, elusive, as it involves a complex trade-off between benefits and costs. The intent of this paper is to introduce an approach based on standard tools from the field of economics, which can offer some insight into this difficult question. We demonstrate its applicability by developing and solving a generic model that incorporates key interactions between platform stakeholders. The solution confirms that the "optimal" number of features a platform should offer strongly depends on variations in cost factors. More interestingly, it reveals a high sensitivity to small relative changes in those costs. The paper's contribution and motivation are in establishing the potential of such a cross-disciplinary approach for providing qualitative and quantitative insights into the complex question of platform design. Title ACM SRC poster: optimizing all-to-all algorithm for PERCS network using simulation Abstract Communication algorithms play a crucial role in the performance of large-scale parallel systems. They are implemented in runtime systems and used in most parallel applications as a critical component. As vendors are willing to design new custom networks with significantly different performance properties for their new supercomputers, designing new efficient communication algorithms is an inevitable challenge. It is desirable to complete this task before the machine comes online, since inefficient use of a resource that may cost hundreds of millions of dollars, before the new algorithm becomes available, is a huge waste. Here, we demonstrate the usability of our simulation framework, BigSim, in meeting this challenge.
Using BigSim, we observe that the commonly used Pairwise-Exchange algorithm for the all-to-all communication pattern is suboptimal for a supernode of the PERCS network (a two-level directly connected topology similar to the Dragonfly topology). We designed a new all-to-all algorithm for it and predict a five-fold performance improvement for large message sizes using this algorithm. Title Can offloading save energy for popular apps? Abstract Offloading tasks to the cloud is one of the proposed solutions for extending the battery life of mobile devices. Most prior research focuses on offloading computation, leaving communication-related tasks out of scope. However, most popular applications today involve intensive communication that consumes a significant part of the overall energy. Hence, we currently do not know how feasible it is to use offloading for saving energy in such apps. In this paper, we first show that it is possible to save energy by offloading communication-related tasks of the app to the cloud. We use an open-source Twitter client, AndTweet, as a case study. However, using a set of popular open-source applications, we also show that existing apps contain constraints that have to be removed with code modifications before offloading can be profitable, and that the potential energy savings depend on many communication parameters. We therefore develop two tools: the first to identify the constraints and the other for fine-grained communication energy estimation. We exemplify the tools and explain how they could be used to help offload parts of popular apps successfully. Title A Unified Methodology for Scheduling in Distributed Cyber-Physical Systems Abstract A distributed cyber-physical system (DCPS) may receive and induce energy-based interference to and from its environment. This article presents a model and an associated methodology that can be used to (i) schedule tasks in DCPSs to ensure that the thermal effects of the task execution are within acceptable levels, and (ii) verify that a given schedule meets the constraints. The model uses coarse discretization of space and linearity of interference. The methodology involves characterizing the interference of the task execution and fitting it into the model, then using the fitted model to verify a solution or explore the solution space. Title On the scalability of the clusters-booster concept: a critical assessment of the DEEP architecture Abstract Cluster computers are dominating high performance computing (HPC) today. The success of this architecture is based on the fact that it profits from the improvements in mainstream computing well known under the label of Moore's law. But trying to get to Exascale within this decade might require additional endeavors beyond surfing this technology wave. In order to find possible directions for the future, we review Amdahl's and Gustafson's thoughts on scalability. Based on this analysis, we propose an advanced architecture combining a Cluster with a so-called Booster element comprising accelerators interconnected by a high-performance fabric. We argue that this architecture provides significant advantages compared to today's accelerated clusters and might pave the way for clusters into the era of Exascale computing. The DEEP project has been presented, aiming for an implementation of this concept.
Six applications from fields having the potential to exploit Exascale systems will be ported to DEEP. We analyze one application in detail and explore the consequences of the constraints of the DEEP systems on its scalability. Title Game cloud design with virtualized CPU/GPU servers and initial performance results Abstract Cloud gaming provides game-on-demand (GoD) services over the Internet cloud. The goal is to achieve faster response times and higher QoS. The video game is rendered remotely on the game cloud and decoded on thin client devices such as a tablet computer or a smartphone. We design a game cloud with a virtualized cluster of CPU/GPU servers at the USC GamePipe Laboratory. We enable interactive gaming by taking full advantage of the cloud and local resources for high quality of experience (QoE) gaming. We report some preliminary performance results on game latency and frame rate. We find a latency of 109~131 ms when using the game cloud, which is 14~38% lower than the 200 ms latency often experienced on a thin local computer. Moreover, the frame rate from the cloud is 25~35% higher than that of using a client computer alone. Based on these results, we anticipate that the game cloud offers a performance gain or QoS improvement of 14~38% over using a thin-client mobile device. Potential applications of the game cloud for high-performance scientific computing are also discussed in the paper. Title Identifying optimal multicore cache hierarchies for loop-based parallel programs via reuse distance analysis Abstract Understanding multicore memory behavior is crucial, but can be challenging due to the complex cache hierarchies employed in modern CPUs. In today's hierarchies, performance is determined by complicated thread interactions, such as interference in shared caches and replication and communication in private caches. Researchers normally perform extensive simulations to study these interactions, but this can be costly and not very insightful. An alternative is multicore reuse distance (RD) analysis, which can provide extremely rich information about multicore memory behavior. In this paper, we apply multicore RD analysis to better understand cache system design. We focus on loop-based parallel programs, an important class of programs for which RD analysis provides high accuracy. We propose a novel framework to identify optimal multicore cache hierarchies, and extract several new insights. We also characterize how the optimal cache hierarchies vary with core count and problem size. CCS General and reference Cross-computing tools and techniques Performance Title Principles of Robust Timing over the Internet Abstract Title Bound by the Speed of Light Abstract Title Poster: new features of the PAPI hardware counter library Abstract The PAPI library has evolved from a cross-platform interface for accessing processor hardware performance counters to a component-based library for simultaneously accessing hardware monitoring information from various components of a computer system, including processors, memory controllers, network switches and interface cards, the I/O subsystem, temperature sensors and power meters, and GPU counters. A GPU component is discussed. A new feature called user-defined events adds a layer of abstraction that allows users to define new metrics by combining previously defined events and machine constants and to share those metrics with other users.
One current effort is the development of a PAPI interface for virtual machines, called PAPI-V, which will allow users to access processor and component hardware performance information from applications running within virtual machines. PAPI continues to be widely used by application developers and by higher level performance analysis tools such as TAU, PerfSuite, Scalasca, IPM, HPCtoolkit, Vampir, and CrayPat. Title Poster: characterizing the impact of memory-access techniques on AMD fusion Abstract The cost of data transfers over PCI-Express often limits application performance on traditional discrete GPUs. To address this, AMD Fusion introduces a novel architecture that fuses the CPU and GPU onto a single die and connects the two with a high-performance memory controller. This architecture features a shared memory space between CPU and GPU, enabling several new memory access techniques that are not available on discrete architectures. For instance, a kernel running on the GPU can now directly access a host memory buffer and vice versa. As an initial step towards understanding the implications of the fused CPU+GPU architecture to heterogeneous computing, we characterize the performance impact of various memory-access techniques on applications running on an AMD Fusion platform (i.e., Llano A8-3850). The experimental results show that AMD Fusion can outperform a discrete GPU of the same performance class by as much as 4-fold for a memory-bound kernel. Title Electronic poster: eeclust: energy-efficient cluster computing Abstract The eeClust project aims at reducing the energy consumption of applications on a HPC cluster by an integrated approach of analysis, efficient management of hardware power-states and monitoring of the clusters power consumption. The application is traced and the trace file is analyzed - manually with Vampir and automatically with Scalasca - to determine phases in the application with non-optimal hardware utilization. The source-code is then instrumented with API calls to control a daemon which switches hardware power-states at runtime. This daemon is aware of shared resources (e.g. the network interface) and only switches a resource to a lower power-state when all processes sharing that resource do not need it. The ParaStation Grid Monitor is used to monitor and visualize the power consumption and hardware usage of the cluster. This poster gives an overview of the project and presents the analysis, hardware management and monitoring aspects in more detail. Title Automated inference of goal-oriented performance prediction functions Abstract Understanding the dependency between performance metrics (such as response time) and software configuration or usage parameters is crucial in improving software quality. However, the size of most modern systems makes it nearly impossible to provide a complete performance model. Hence, we focus on scenario-specific problems where software engineers require practical and efficient approaches to draw conclusions, and we propose an automated, measurement-based model inference method to derive goal-oriented performance prediction functions. For the practicability of the approach it is essential to derive functional dependencies with the least possible amount of data. In this paper, we present different strategies for automated improvement of the prediction model through an adaptive selection of new measurement points based on the accuracy of the prediction model. 
In order to derive the prediction models, we apply and compare different statistical methods. Finally, we evaluate the different combinations based on case studies using SAP and SPEC benchmarks. Title Empowering developers to estimate app energy consumption Abstract Battery life is a critical performance and user experience metric on mobile devices. However, it is difficult for app developers to measure the energy used by their apps, and to explore how energy use might change with conditions that vary outside the developer's control, such as network congestion, choice of mobile operator, and user settings for screen brightness. We present an energy emulation tool that allows developers to estimate the energy use of their mobile apps on their development workstation itself. The proposed techniques scale the emulated resources, including the processing speed and network characteristics, to match the app behavior to that on a real mobile device. We also enable exploring multiple operating conditions that the developers cannot easily reproduce in their lab. The estimation of energy relies on power models for various components, and we also add new power models for components not modeled in prior work, such as AMOLED displays. We also present a prototype implementation of this tool and evaluate it through comparisons with real device energy measurements. Title Performance comparison and node failure assessment of energy efficient two level balanced and progressive sensor networks Abstract This research is concerned with the design of Tree WSNs and demonstrates that configuration plays a vital role in energy optimisation, node failure, and network lifetime when designing a Tree WSN. We compare a Progressive Two Level Tree WSN with a Balanced Two Level Tree WSN. Our simulation results show that a Progressive configuration has two advantages over a Balanced configuration: fewer computations are required to complete each process, and it is more tolerant of node failures. Both advantages make the Progressive configuration more energy efficient than the Balanced configuration. Therefore, the lifetime of a Progressive Two Level Tree WSN is longer than that of a Balanced Two Level Tree WSN. Title A systematic process for efficient execution on Intel's heterogeneous computation nodes Abstract Heterogeneous architectures (mainstream CPUs with accelerators/co-processors) are expected to become more prevalent in high performance computing clusters. This paper deals specifically with attaining efficient execution on nodes which combine Intel's multicore Sandy Bridge chips with MIC manycore chips. The architecture and software stack for Intel's heterogeneous computation nodes attempt to make migration from the now common multicore chips to the many-core chips straightforward. However, these manycore chips favor specific execution characteristics, such as use of the wider vector instructions, minimal inter-thread conflicts, etc. Additionally, manycore chips have lower clock speeds and no unified last-level cache. As a result, and as we demonstrate in this paper, it will commonly be the case that not all parts of an application will execute more efficiently on the manycore chip than on the multicore chip.
This paper presents a process, based on measurements of execution on Westmere-based multicore chips, which can accurately predict which code segments will execute efficiently on the manycore chips and illustrates and evaluates its application to three substantial full programs -- HOMME, MOIL and MILC. The effectiveness of the process is validated by verifying scalability of the specific functions and loops that were recommended for MIC execution on a Knights Ferry computation node. Title Evolving NK-complexity for evolutionary solvers Abstract In this paper we empirically investigate the structural characteristics that can help to predict the complexity of NK-landscape instances for estimation of distribution algorithms (EDAs). We evolve instances that maximize the EDA complexity in terms of its success rate. Similarly, instances that minimize the algorithm complexity are evolved. We then identify network measures, computed from the structures of the NK-landscape instances, that have a statistically significant difference between the set of easy and hard instances. The features identified are consistently significant for different values of $N$ and $K$. CCS General and reference Cross-computing tools and techniques Validation Title Deterministic parallelism via liquid effects Abstract Shared memory multithreading is a popular approach to parallel programming, but also fiendishly hard to get right. We present Title Verifying GPU kernels by test amplification Abstract We present a novel technique for verifying properties of data parallel GPU programs via test Title Partially Evaluating Finite-State Runtime Monitors Ahead of Time Abstract Finite-state properties account for an important class of program properties, typically related to the order of operations invoked on objects. Many library implementations therefore include manually written finite-state monitors to detect violations of finite-state properties at runtime. Researchers have recently proposed the explicit specification of finite-state properties and automatic generation of monitors from the specification. However, runtime monitoring only shows the presence of violations, and typically cannot prove their absence. Moreover, inserting a runtime monitor into a program under test can slow down the program by several orders of magnitude. In this work, we therefore present a set of four static whole-program analyses that partially evaluate runtime monitors at compile time, with increasing cost and precision. As we show, ahead-of-time evaluation can often evaluate the monitor completely statically. This may prove that the program cannot violate the property on any execution or may prove that violations do exist. In the remaining cases, the partial evaluation converts the runtime monitor into a residual monitor. This monitor only receives events from program locations that the analyses failed to prove irrelevant. This makes the residual monitor much more efficient than a full monitor, while still capturing all property violations at runtime. We implemented the analyses in Clara, a novel framework for the partial evaluation of AspectJ-based runtime monitors, and validated our approach by applying Clara to finite-state properties over several large-scale Java programs. Clara proved that most of the programs never violate our example properties. Some programs required monitoring, but in those cases Clara could often reduce the monitoring overhead to below 10%. We observed that several programs did violate the stated properties. 
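As a concrete illustration of the finite-state monitors that the Clara framework described above partially evaluates, the sketch below encodes a typestate property as a small transition table and feeds it the events emitted by an instrumented program. It is only a minimal Python sketch under assumed event names; Clara itself works on AspectJ-based runtime monitors for Java, and its ahead-of-time analyses would remove the instrumentation at call sites proven irrelevant, leaving a residual monitor for the remaining ones.

```python
# Minimal sketch of a finite-state (typestate) runtime monitor; the property and
# event names are hypothetical, not taken from Clara's benchmark properties.
class IteratorMonitor:
    # Property: next() may only be called after a preceding hasNext() since the last next().
    TRANSITIONS = {
        ("start", "hasNext"): "ready",
        ("ready", "hasNext"): "ready",
        ("ready", "next"): "start",
        ("start", "next"): "violation",
    }

    def __init__(self):
        self.state = "start"
        self.violations = 0

    def on_event(self, event):
        # Follow the transition table; unknown events leave the state unchanged.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        if self.state == "violation":
            self.violations += 1
            self.state = "start"  # keep monitoring after reporting the violation

monitor = IteratorMonitor()
for event in ["hasNext", "next", "next"]:  # events emitted by instrumented call sites
    monitor.on_event(event)
print(monitor.violations)  # 1: the second next() had no preceding hasNext()
```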
Title GKLEE: concolic verification and test generation for GPUs Abstract Programs written for GPUs often contain correctness errors such as races and deadlocks, or may compute the wrong result. Existing debugging tools often miss these errors because of their limited input-space and execution-space exploration. Existing tools based on conservative static analysis or conservative modeling of SIMD concurrency generate false alarms, resulting in wasted bug-hunting effort. They also often do not target performance bugs (non-coalesced memory accesses, memory bank conflicts, and divergent warps). We provide a new framework called GKLEE that can analyze C++ GPU programs, locating the aforesaid correctness and performance bugs. For these programs, GKLEE can also automatically generate tests that provide high coverage. These tests serve as concrete witnesses for every reported bug. They can also be used for downstream debugging, for example to test the kernel on the actual hardware. We describe the architecture of GKLEE, its symbolic virtual machine model, and previously unknown bugs and performance issues that it detected on commercial SDK kernels. We describe GKLEE's test-case reduction heuristics, and the resulting scalability improvement for a given coverage target. Title An Ada design pattern recognition tool for AADL performance analysis Abstract This article deals with performance verification of architecture models of real-time embedded systems. Although real-time scheduling theory provides numerous analytical methods, called feasibility tests, for scheduling analysis, their use is a complicated task. In order to assist an architecture model designer in early verification, we provide an approach, based on real-time specific design patterns, enabling an automatic schedulability analysis. This analysis is based on existing feasibility tests, whose selection is deduced from the compliance of the system with a design pattern and other system properties. Those conformity verifications are integrated into a schedulability tool called Cheddar. We show how to model the relationships between design patterns and feasibility tests, as well as the relationships among design patterns themselves. Based on these models, we apply a model-based engineering process to generate, in Ada, a feasibility test selection tool. The tool is able to detect, from an architecture model, which feasibility tests the designer can apply. We explain a method for a designer willing to use this approach. We also describe the design patterns defined and the selection algorithm. Title A trajectory correlation algorithm based on users' daily routines Abstract In recent years, there has been a change in societal behavior regarding the way people interact with each other. In particular, there is a tendency for people to shift from real to virtual communities. Consequently, social opportunities are frequently missed because users have to manually describe their daily routines in virtual communities. Nevertheless, mobile social applications are emerging to improve social connectivity in real communities by capturing context information about people, points of interest, and places, taking user trajectories into account. In this paper, we present a Trajectory Correlation Algorithm based on Users' Daily Routines. The key idea is to provide a solution to capture daily routines in order to find related information about users and, consequently, increase social interactions in real communities.
We introduce an algorithm to execute the trajectory correlation process, taking into account daily trajectories and points of interest of users. To validate our proposal, we implemented and tested a mobile social application for tracking daily routines. Besides that, we developed a plug-in on a virtual community platform to execute an optimized trajectory correlation algorithm, which is based on Minimum Bounding Rectangles (MBRs) and Hausdorff Distance. The results show that our proposal is efficient to increase social interactions in real communities by using a mobile social application and a well-known social network platform. Title Efficient incremental information flow control with nested control regions Abstract Mobile application platforms like cell phones are ubiquitous today. Even on limited devices, users expect well-performing applications that also respect the privacy of the user's stored data, such as messages, addresses and calendar items. Existing techniques, however, do not provide an adequate solution: Dynamic algorithms incur a significant space and time overhead. Static approaches help a developer in creating secure programs, but previous work requires a whole-program verification. This paper proposes a novel intermediate representation that is designed to be easily analyzed and verified by clients as well as support incremental verification. The IR can be verified with a single-pass, linear time algorithm. The resulting reduction of memory requirements is particularly important for limited mobile devices. Metadata, including security properties, can be reliably transmitted through annotatable type systems, as demonstrated by the adoption of a practical security-enhanced programming language as an input for our intermediate representation. A simplified imperative language with incremental loading is formally proved safe as a foundation for the practical implementation. Title RoleCast: finding missing security checks when you do not know what checks are Abstract Web applications written in languages such as PHP and JSP are notoriously vulnerable to accidentally omitted authorization checks and other security bugs. Existing techniques that find missing security checks in library and system code assume that (1) security checks can be recognized syntactically and (2) the same pattern of checks applies universally to all programs. These assumptions do not hold for Web applications. Each Web application uses different variables and logic to check the user's permissions. Even within the application, security logic varies based on the user's role, e.g., regular users versus administrators. This paper describes ROLECAST, the first system capable of statically identifying security logic that mediates security-sensitive events (such as database writes) in Web applications, rather than taking a specification of this logic as input. We observe a consistent software engineering pattern-the code that implements distinct user role functionality and its security logic resides in distinct methods and files-and develop a novel algorithm for discovering this pattern in Web applications. Our algorithm partitions the set of file contexts (a coarsening of calling contexts) on which security-sensitive events are control dependent into roles. Roles are based on common functionality and security logic. ROLECAST identifies security-critical variables and applies rolespecific variable consistency analysis to find missing security checks. 
ROLECAST discovered 13 previously unreported, remotely exploitable vulnerabilities in 11 substantial PHP and JSP applications, with only 3 false positives. This paper demonstrates that (1) accurate inference of application- and role-specific security logic improves the security of Web applications without specifications, and (2) static analysis can discover security logic automatically by exploiting distinctive software engineering features. Title On validation of ATL transformation rules by transformation models Abstract Model-to-model transformations constitute an important ingredient in model-driven engineering. As real world transformations are complex, systematic approaches are required to ensure their correctness. The ATLAS Transformation Language (ATL) is a mature transformation language which has been successfully applied in several areas. However, the executable nature of ATL is a barrier for the validation of transformations. In contrast, transformation models provide an integrated structural description of the source and target metamodels and the transformation between them. While not being executable, transformation models are well-suited for analysis and verification of transformation properties. In this paper, we discuss (a) how ATL transformations can be translated into equivalent transformation models and (b) illustrate how these surrogates can be employed to validate properties of the original transformation. Title Natural language generation from class diagrams Abstract A Platform-Independent Model (PIM) is supposed to capture the requirements specified in the Computational Independent Model (CIM). It can be hard to validate that this is the case since the stakeholders might lack the necessary training to access the information of the software models in the PIM. In contrast, a description of the PIM in natural language will enable all stakeholders to be included in the validation. We have conducted a case study to investigate the possibilities to generate natural language text from Executable and Translatable UML. In our case study we have considered a static part of the PIM; the structure of the class diagram. The transformation was done in two steps. In the first step, the class diagram was transformed into an intermediate linguistic model using Grammatical Framework. In the second step, the linguistic model is transformed into natural language text. The PIM was enhanced in such a way that the generated texts can both paraphrase the original software models as well as include the underlying motivations behind the design decisions. CCS General and reference Cross-computing tools and techniques Verification Title Automatic verification of control system implementations Abstract Software implementations of controllers for physical subsystems form the core of many modern safety-critical systems such as aircraft flight control and automotive engine control. A fundamental property of such implementations is The design of controllers for physical systems provides not only the controllers but also mathematical proofs of their stability under idealized mathematical models. Unfortunately, since these models do not capture most of the implementation details, it is not always clear if the stability properties are retained by the software implementation, either because of software bugs, or because of imprecisions arising from fixed-precision arithmetic or timing. Our methodology is based on the following separation of concerns. 
First, we analyze the controller mathematical models to derive bounds on the implementation errors that can be tolerated while still guaranteeing stability. Second, we automatically analyze the controller software to check if the maximal implementation error is within the tolerance bound computed in the first step. We have implemented this methodology in Costan, a tool to check stability for controller implementations. Using Costan, we analyzed a set of control examples whose mathematical models are given in Matlab/Simulink and whose C implementation is generated using Real-Time Workshop. Unlike previous static analysis research, which has focused on proving low-level runtime properties such as absence of buffer overruns or arithmetic overflows, our technique combines analysis of the mathematical controller models and automated analysis of source code to guarantee application-level stability properties. Title Using unfoldings in automated testing of multithreaded programs Abstract In multithreaded programs both environment input data and the nondeterministic interleavings of concurrent events can affect the behavior of the program. One approach to systematically explore the nondeterminism caused by input data is dynamic symbolic execution. For testing multithreaded programs we present a new approach that combines dynamic symbolic execution with unfoldings, a method originally developed for Petri nets but also applied to many other models of concurrency. We provide an experimental comparison of our new approach with existing algorithms combining dynamic symbolic execution and partial-order reductions and show that the new algorithm can explore the reachable control states of each thread with a significantly smaller number of test runs. In some cases the reduction in the number of test runs can even be exponential, allowing programs with long test executions or hard-to-solve constraints generated by symbolic execution to be tested more efficiently. Title Code patterns for automatically validating requirements-to-code traces Abstract Traces between requirements and code reveal where requirements are implemented. Such traces are essential for code understanding and change management. Unfortunately, traces are known to be error-prone. This paper introduces a novel approach for validating requirements-to-code traces through calling relationships within the code. As input, the approach requires an executable system, the corresponding requirements, and the requirements-to-code traces that need validating. As output, the approach identifies likely incorrect or missing traces by investigating patterns of traces with calling relationships. The empirical evaluation of four case study systems covering 150 KLOC and 59 requirements demonstrates that the approach detects most errors with 85-95% precision and 82-96% recall and is able to handle traces of varying levels of correctness and completeness. The approach is fully automated, tool supported, and scalable. Title PuMoC: a CTL model-checker for sequential programs Abstract In this paper, we present PuMoC, a CTL model checker for Pushdown systems (PDSs) and sequential C/C++ and Java programs. PuMoC supports CTL model-checking w.r.t. simple valuations, where the atomic propositions depend on the control locations of the PDSs, and w.r.t. regular valuations, where atomic propositions are regular predicates over the stack content.
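Editorial note on the requirements-to-code tracing abstract in this entry: the paper validates traces through calling relationships, but its specific code patterns are not reproduced here. The sketch below, over an entirely hypothetical call graph and trace set, only illustrates one plausible heuristic in that spirit: a method called by methods traced to a requirement, yet not itself traced to that requirement, is flagged as a candidate missing trace.

# Hypothetical call graph: caller -> set of callees.
calls = {
    "login": {"check_password", "log_event"},
    "check_password": {"hash"},
    "report": {"log_event"},
}

# Hypothetical requirements-to-code traces: requirement -> traced methods.
traces = {
    "R1 user authentication": {"login", "hash"},
    "R2 audit logging": {"log_event", "report"},
}

def candidate_missing_traces(calls, traces):
    # Flag callees of traced methods that are not themselves traced.
    findings = []
    for req, methods in traces.items():
        for caller in methods:
            for callee in calls.get(caller, set()):
                if callee not in methods:
                    findings.append((req, caller, callee))
    return findings

for req, caller, callee in candidate_missing_traces(calls, traces):
    print(f"{req}: {caller} calls {callee}, which is not traced to this requirement")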
Our tool allowed us to (1) check 500 randomly generated PDSs against several CTL formulas; (2) check around 1461 versions of 30 Windows drivers taken from SLAM benchmarks; (3) check several C and Java programs; and (4) perform data flow analysis of real-world Java programs. Our results show the efficiency and the applicability of our tool. Title Automatically securing permission-based software by reducing the attack surface: an application to Android Abstract In the permission-based security model (used e.g. in Android and Blackberry), applications can be granted more permissions than they actually need, which we call a “permission gap”. Malware can leverage the unused permissions to achieve their malicious goals, for instance using code injection. In this paper, we present an approach to detecting permission gaps using static analysis. Using our tool on a dataset of Android applications, we found that a non-negligible fraction of applications suffers from permission gaps, i.e., they do not use all the permissions they declare. Title Predicting common web application vulnerabilities from input validation and sanitization code patterns Abstract Software defect prediction studies have shown that defect predictors built from static code attributes are useful and effective. On the other hand, to mitigate the threats posed by common web application vulnerabilities, many vulnerability detection approaches have been proposed. However, finding alternative solutions to address these risks remains an important research problem. As web applications generally adopt input validation and sanitization routines to prevent web security risks, in this paper, we propose a set of static code attributes that represent the characteristics of these routines for predicting the two most common web application vulnerabilities—SQL injection and cross-site scripting. In our experiments, vulnerability predictors built from the proposed attributes detected more than 80% of the vulnerabilities in the test subjects at low false alarm rates. Title Software defect prediction using semi-supervised learning with dimension reduction Abstract Accurate detection of fault-prone modules offers the path to high-quality software products while minimizing non-essential assurance expenditures. This type of quality modeling requires the availability of software modules with known fault content developed in a similar environment. Establishing whether a module contains a fault or not can be expensive. The basic idea behind semi-supervised learning is to learn from a small number of software modules with known fault content and supplement model training with modules for which the fault information is not available. In this study, we investigate the performance of semi-supervised learning for software fault prediction. A preprocessing strategy, multidimensional scaling, is embedded in the approach to reduce the dimensional complexity of software metrics. Our results show that the semi-supervised learning algorithm with dimension reduction performs significantly better than one of the best performing supervised learning algorithms, random forest, in situations when few modules with known fault content are available for training. Title Arcade.PLC: a verification platform for programmable logic controllers Abstract This paper introduces Arcade.PLC, a verification platform for programmable logic controllers (PLCs).
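Editorial note on the permission-gap abstract in this entry: the paper detects Android applications that declare permissions they never use. The sketch below is not the authors' tool; it only illustrates the underlying set comparison, assuming a hypothetical mapping from API calls found by static analysis to the permissions they require.

# Permissions declared in the (hypothetical) application manifest.
declared = {"INTERNET", "READ_CONTACTS", "SEND_SMS", "CAMERA"}

# Hypothetical mapping from observed API calls to required permissions,
# as it could be produced by a static analysis of the application code.
api_to_permission = {
    "HttpURLConnection.connect": "INTERNET",
    "ContactsContract.query": "READ_CONTACTS",
}
observed_calls = {"HttpURLConnection.connect", "ContactsContract.query"}

used = {api_to_permission[c] for c in observed_calls if c in api_to_permission}
permission_gap = declared - used
# Unused (over-declared) permissions are candidates for removal to reduce
# the attack surface.
print("Permission gap:", sorted(permission_gap))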
The tool supports static analysis as well as ACTL and past-time LTL model checking using counterexample-guided abstraction refinement for different programming languages used in industry. The framework exploits knowledge about the hardware platform in its underlying principles so as to provide efficient techniques. The effectiveness of the approach is evaluated on programs implemented using a combination of programming languages. Title User-aware privacy control via extended static-information-flow analysis Abstract Applications in mobile marketplaces may leak private user information without notification. Existing mobile platforms provide little information on how applications use private user data, making it difficult for experts to validate applications and for users to grant applications access to their private data. We propose a user-aware privacy control approach, which reveals how private information is used inside applications. We compute static information flows and classify them as safe/unsafe based on a tamper analysis that tracks whether private data is obscured before escaping through output channels. This flow information enables platforms to provide default settings that expose private data only for safe flows, thereby preserving privacy and minimizing decisions required from users. We built our approach into TouchDevelop, an application-creation environment that allows users to write scripts on mobile devices and install scripts published by other users. We evaluate our approach by studying 546 scripts published by 194 users. Title Unbounded data model verification using SMT solvers Abstract The growing influence of web applications in every aspect of society makes their dependability an immense concern. A fundamental building block of web applications that use the Model-View-Controller (MVC) pattern is the data model, which specifies the object classes and the relations among them. We present an approach for unbounded, automated verification of data models that 1) extracts a formal data model from an Object Relational Mapping, 2) converts verification queries about the data model to queries about the satisfiability of formulas in the theory of uninterpreted functions, and 3) uses a Satisfiability Modulo Theories (SMT) solver to check the satisfiability of the resulting formulas. We implemented this approach and applied it to five open-source Rails applications. Our results demonstrate that the proposed approach is feasible, and is more efficient than SAT-based bounded verification. CCS Hardware Printed circuit boards Electromagnetic interference and compatibility CCS Hardware Printed circuit boards PCB design and layout CCS Hardware Communication hardware, interfaces and storage Signal processing systems CCS Hardware Communication hardware, interfaces and storage Sensors and actuators Title Robotic swarm cooperation by co-adaptation Abstract This paper presents a framework for co-adapting mobile sensors in hostile environments to allow telepresence of a distant user. The presented technique relies on cooperative co-evolution for sensor placement. It is shown that cooperative co-evolution is able to simultaneously find the required number of sensors to observe a given environment and a configuration that is consistently better than those found by other well-known optimization algorithms. Moreover, it is shown that co-evolution is also able to quickly reach a new configuration when the environment changes.
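Editorial note on the user-aware privacy control abstract in this entry: the paper classifies static information flows as safe or unsafe depending on whether private data is obscured before reaching an output channel. The toy sketch below uses a hypothetical straight-line script and hypothetical source/sink/obscuring operation names; it is not the authors' analysis, only an illustration of the idea that a flow from a private source to a sink counts as unsafe unless the value passed through an obscuring step such as hashing.

PRIVATE_SOURCES = {"read_location", "read_contacts"}
OBSCURING_OPS = {"hash", "coarsen"}
SINKS = {"post_to_web", "send_sms"}

# Hypothetical straight-line script: (target variable, operation, arguments).
script = [
    ("loc", "read_location", []),
    ("h", "hash", ["loc"]),
    ("_", "post_to_web", ["h"]),      # obscured before leaving: safe
    ("c", "read_contacts", []),
    ("_", "send_sms", ["c"]),         # raw private data escapes: unsafe
]

def classify_flows(script):
    tainted, obscured, report = set(), set(), []
    for target, op, args in script:
        if op in PRIVATE_SOURCES:
            tainted.add(target)
        elif op in OBSCURING_OPS and any(a in tainted for a in args):
            obscured.add(target)
        elif op in SINKS:
            for a in args:
                if a in tainted:
                    report.append((op, a, "unsafe"))
                elif a in obscured:
                    report.append((op, a, "safe"))
    return report

for sink, var, verdict in classify_flows(script):
    print(f"flow of {var} into {sink}: {verdict}")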
NA Title Co-adapting mobile sensor networks to maximize coverage in dynamic environments Abstract With recent advances in mobile computing, swarm robotics has demonstrated its utility in countless situations like recognition, surveillance, and search and rescue. This paper presents a novel approach to optimize the position of a swarm of robots to accomplish sensing tasks based on cooperative co-evolution. Results show that the introduced cooperative method simultaneously finds the right number of sensors while also optimizing their positions in static and dynamic environments. Title Advances in tactile sensing and touch based human-robot interaction Abstract The problem of "providing robots with the sense of touch" is fundamental in order to develop the next generations of robots capable of interacting with humans in different contexts: in daily housekeeping activities, as working partners or as caregivers, just to name a few. From a low-level perspective, through tactile sensing it is possible to measure or estimate physical properties of manipulated or touched objects, whereas feedback from tactile sensors may enable the detection and safe control of the interaction between the robot and objects or humans. From a high-level perspective, touch-based cognitive processes can be entailed by developing robot body self-awareness capabilities and by differentiating the "self" from the "external space", thereby opening new relevant problems in Robotics. The objective of this Workshop is to present and discuss the most recent achievements in the area of tactile sensing starting from the technological aspects, up to the application problems where tactile feedback plays a fundamental role. The Workshop will cover, but will not be limited to, the following three areas: Title Ekho: bridging the gap between simulation and reality in tiny energy-harvesting sensors Abstract Harvested energy makes long-term maintenance-free sensor deployments possible; however, as devices shrink in order to accommodate new applications, tightening energy budgets and increasing power supply volatility leave system designers poorly equipped to predict how their devices will behave when deployed. This paper describes the design and initial FPGA-based implementation of Ekho, a tool that records and emulates energy harvesting conditions, in order to support realistic and repeatable testing and experimentation. Ekho uses the abstraction of I-V curves---curves that describe harvesting current with respect to supply voltage---to accurately represent harvesting conditions, and supports a range of harvesting technologies. An early prototype emulates I-V curves with 0.1mA accuracy, and responds in 4.4μ Title Knowledge discovery from sensor data (SensorKDD) Abstract Sensor data is being collected at an unprecedented rate across a variety of domains from a broad spectrum of sources, such as wide-area sensor infrastructures, remote sensing instruments, RFIDs, and wireless sensor networks. With the recent proliferation of smart-phones, and similar GPS-enabled mobile devices, collection of sensor data is no longer limited to scientific communities, but has reached the general public.
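Editorial note on the Ekho abstract in this entry: the paper represents energy-harvesting conditions as I-V curves (harvesting current as a function of supply voltage) and replays them for repeatable experiments. The actual tool is FPGA-based hardware; the sketch below is only a software analogue of that abstraction, using hypothetical curve data and simple piecewise-linear interpolation.

# A recorded I-V curve as (supply voltage [V], harvesting current [mA]) samples.
# Values are hypothetical, loosely shaped like a small solar harvester.
iv_curve = [(0.0, 1.20), (1.0, 1.15), (2.0, 1.00), (3.0, 0.60), (3.6, 0.00)]

def harvesting_current(voltage, curve=iv_curve):
    # Piecewise-linear interpolation of the recorded curve.
    if voltage <= curve[0][0]:
        return curve[0][1]
    for (v0, i0), (v1, i1) in zip(curve, curve[1:]):
        if v0 <= voltage <= v1:
            t = (voltage - v0) / (v1 - v0)
            return i0 + t * (i1 - i0)
    return curve[-1][1]

# Emulation loop: the "harvester" answers the device's supply voltage with the
# current the recorded conditions would have provided at that operating point.
for supply_v in (0.5, 1.8, 3.3):
    print(f"{supply_v:.1f} V -> {harvesting_current(supply_v):.2f} mA")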
With massive volumes of such disparate, dynamic, and geographically distributed data available, many high-priority applications have been identified that involve analysis of such data to solve real-world problems such as understanding climate change and its impacts, electric grid monitoring, disaster preparedness and management, national or homeland security, and the management of critical infrastructures. Given the unique characteristics of sensor data, particularly its spatiotemporal nature and presence of constraints associated with the data collection and computational resources, there have been many research efforts to analyze the sensor data which build upon the general research in the data mining community but are significantly different in terms of how they address the specific challenges encountered when dealing with sensor data. In particular, the raw data from sensors needs to be efficiently managed and transformed to usable information through data fusion, which in turn must be converted to predictive insights via knowledge discovery, ultimately facilitating automated or human-induced tactical decisions or strategic policy based on decision sciences and decision support systems. Keeping in view the requirements of the emerging field of knowledge discovery from sensor data, we took the initiative to develop a community of researchers with common interests and scientific goals, which culminated in the organization of the SensorKDD series of workshops in conjunction with the prestigious ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. In this report, we summarize events at the Fourth ACM-SIGKDD International Workshop on Knowledge Discovery from Sensor Data (SensorKDD 2010). Title Integration of a low-cost RGB-D sensor in a social robot for gesture recognition Abstract An objective of natural Human-Robot Interaction (HRI) is to enable humans to communicate with robots in the same manner humans do between themselves. This includes the use of natural gestures to support and expand the information that is exchanged in the spoken language. To achieve that, robots need robust gesture recognition systems to detect the non-verbal information that is sent to them by the human gestures. Traditional gesture recognition systems highly depend on the light conditions and often require a training process before they can be used. We have integrated a low-cost commercial RGB-D (Red Green Blue - Depth) sensor in a social robot to allow it to recognise dynamic gestures by tracking a skeleton model of the subject and coding the temporal signature of the gestures in an FSM (Finite State Machine). The vision system is independent of low light conditions and does not require a training process. Title Neural network based sensor drift compensation of induction motor Abstract In this paper, sensor drift compensation for vector control of an induction motor using a neural network is presented. An induction motor is controlled based on vector control. The sensors sense the primary feedback signals for the feedback control system which is processed by the controller. Any fault in the sensors causes incorrect measurements of feedback signals due to malfunction in sensor circuit elements which affects the system performance. Hence, sensor fault compensation or drift compensation is important for an electric drive. Analysis of sensor drift compensation in motor drives is done using neural networks. The feedback signals from the phase current sensors are given as the neural network input.
The neural network then performs the auto-associative mapping of these signals so that its output is an estimate of the sensed signals. Since the auto-associative neural network exploits the physical and analytical redundancy, whenever a sensor starts to drift, the drift is compensated at the output, and the performance of the drive system is barely affected. Title A fluid-suspension, electromagnetically driven eye with video capability for animatronic applications Abstract (Our work of the same title was initially published at "Humanoid '09" in Paris, France, and should be referred to for details). We have prototyped a compact, fluid-suspension, electromagnetically-rotated animatronic eye. The Eye has no external moving parts, features low operating power, a range of motion and saccade speeds that can exceed that of the human eye, and an absence of frictional wear points. It supports a rear, stationary, video camera. In a special application, the eye can be separated into a hermetically sealable portion that might be used as a human eye prosthesis along with an extra-cranially-mounted magnetic drive. Title Software verification for TinyOS Abstract We describe the first software tool for the Title Context-aware robot navigation based on sensor association rules Abstract Within the mobile robotics research community, a great many approaches have been proposed for solving the navigation problem. The key difference between these various navigation architectures is the manner in which they decompose the problem into smaller subunits. In this paper, a data mining methodology developed for retrieving significant frequent patterns is extended to allow robots to learn and navigate on unknown terrain in a natural way. The method has two phases: a context identification phase and a validation phase. The conjunction of these phases provides an easy and straightforward way for robots to explore new working spaces. CCS Hardware Communication hardware, interfaces and storage Buses and high-speed links Title A hybrid NoC design for cache coherence optimization for chip multiprocessors Abstract On-chip many-core systems, evolving from prior multi-processor systems, are considered a promising solution to the performance scalability and power consumption problems. The long communication distance between the traditional multi-processors makes directory-based cache coherence protocols better solutions compared to bus-based snooping protocols even with the overheads from indirections. However, much smaller distances between the CMP cores enhance the reachability of buses, revitalizing the applicability of snooping protocols for cache-to-cache transfers. In this work, we propose a hybrid NoC design to provide optimized support for cache coherency. In our design, on-chip links can be dynamically configured as either point-to-point links between NoC nodes or short buses to facilitate localized snooping. By taking advantage of the best of both worlds, bus-based snooping coherency and NoC-based directory coherency, our approach brings both power and performance benefits. Title A novel hybrid FIFO asynchronous clock domain crossing interfacing method Abstract Multi-clock domain circuits with Clock Domain Crossing (CDC) interfaces are emerging as an alternative to circuits with a global clock. CDC interfaces are susceptible to metastability, hence their design is very challenging. This paper presents a hybrid FIFO-asynchronous method for constructing robust CDC interfaces.
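Editorial note on the sensor drift compensation abstract above: the paper feeds phase-current measurements into an auto-associative neural network whose output estimates the uncorrupted signals. The sketch below is a deliberately tiny stand-in, not the authors' network: it replaces the learned mapping with the explicit analytical redundancy of a balanced three-phase system (the phase currents sum to zero), which is the same redundancy such a network would exploit, and uses hypothetical signal data.

import numpy as np

# Hypothetical balanced three-phase currents (the redundancy: ia + ib + ic = 0).
t = np.linspace(0.0, 0.04, 200)                       # two cycles at 50 Hz
ia = 10.0 * np.sin(2 * np.pi * 50 * t)
ib = 10.0 * np.sin(2 * np.pi * 50 * t - 2 * np.pi / 3)
ic = -(ia + ib)

# Inject a slow drift (growing offset) into the phase-a sensor.
measured_a = ia + np.linspace(0.0, 2.0, t.size)

# "Auto-associative" estimate of phase a reconstructed from the redundant channels.
estimated_a = -(ib + ic)

drift = measured_a - estimated_a
print(f"max detected drift: {drift.max():.2f} A")
print(f"residual after compensation: {np.abs(measured_a - drift - ia).max():.2e} A")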
The proposed design can handle arbitrary clock frequency ratios between the sender and receiver with random phase shifts. The proposed design avoids latency due to synchronizers through the asynchronous protocol modifications. Circuit simulation results confirm the operation and robustness of the design at maximum workloads, and arbitrary frequency ratios, over a temperature range of -50 to 50 degrees Celsius. The interface offers a maximum throughput of 606 million transfers per second without pausing the clock. Title An optimized multicore cache coherence design for exploiting communication locality Abstract Supporting cache coherence in current multicore processors still faces scalability and performance problems. This paper presents an optimized cache coherence design targeting NoC-based multicore processors. It tries to achieve the best characteristics both of the snooping and of the directory-based protocols. With the observation of network traffic locality, we design a cache coherence scheme that handles local and remote accesses separately. At the first level, snooping is achieved within a cache group, and at the second level of the protocol, the coarse directories provide the caches with information about which processors must be involved in first-level snooping. To support efficient coherence broadcasting, we also propose a low-latency, broadcast-enabled underlying NoC design. It incorporates lightweight buses into NoCs, where the snooping protocol can be performed in a broadcast fashion. Extensive experimental results demonstrate that the proposed coherence design can achieve both low complexity and high performance goals. Title FIOS: a flexible virtualized I/O subsystem to alleviate interference among virtual machines Abstract Serving as the infrastructure of cloud computing, virtualization technologies have attracted considerable interest in recent years for their excellent resource utility, scalability, and high availability. Title PRO3D: programming for future 3D manycore architectures Abstract PRO3D tackles two 3D technologies and their consequences on stacked architectures and the software stack: through silicon vias (TSV) and liquid cooling. 3D memory hierarchies and the thermal impact of software on the 3D stack are mainly explored. The PRO3D software development flow is based on a rigorous assembly of software components and monitors the thermal integrity of the 3D stack. PRO3D experiments are mainly targeted on P2012, an industrial embedded manycore platform. Title Modality switching and performance in a thought and speech controlled computer game Abstract Providing multiple modalities to users is known to improve the overall performance of an interface. The weakness of one modality can be overcome by the strength of another. Moreover, with respect to their abilities, users can choose between the modalities to use the one that is the best for them. In this paper we explored whether this holds for direct control of a computer game which can be played using a brain-computer interface (BCI) and an automatic speech recogniser (ASR). Participants played the game in unimodal mode (i.e. ASR-only and BCI-only) and multimodal mode where they could switch between the two modalities. The majority of the participants switched modality during the multimodal game but most of the time they stayed in ASR control. Therefore multimodality did not provide a significant performance improvement over unimodal control in our particular setup.
We also investigated the factors which influence modality switching. We found that performance and performance-related factors had the most prominent effect on modality switching. Title User expectations and experiences of a speech and thought controlled computer game Abstract Brain-computer interfaces (BCIs) are often evaluated in terms of performance and seldom for usability. However in some application domains, such as entertainment computing, user experience evaluation is vital. User experience evaluation in BCI systems, especially in entertainment applications such as games, can be biased due to the novelty of the interface. However, as the novelty will eventually vanish, what matters is the user experience related to the unique features offered by BCI. Therefore it is a viable approach to compare BCI to other novel modalities, such as a speech or motion recogniser, rather than the traditional mouse and keyboard. In the study which we present in this paper, our participants played a computer game with a BCI and an automatic speech recogniser (ASR) and they rated their expectations and experiences for both modalities. Our analysis of subjective ratings revealed that both ASR and BCI were successful in satisfying participants' expectations in general. Participants found speech control easier to learn than BCI control. They indicated that BCI control induced more fatigue than they expected. Title A NoC system generator for the Sea-of-Cores era Abstract Multi-core systems are getting bigger. The number of cores is doubling every 18 months, as a corollary of the reformulated Moore's law. Soon, the number of cores that can be integrated together in a system will be so large that it is appropriate to talk about a new SoC design paradigm, the Sea-of-Cores era. This development will not end, even when CMOS cannot be made any smaller. Instead, with the development of Through-Silicon Vias (TSVs), chips will be stacked in 3D, promising continuous scaling for a very long time ahead. As systems grow, programming and debugging them will become harder. Methods for generating the systems from higher-level specifications will be necessary to manage design complexity. Also, there will be so many processors to be programmed that the SW will also have to be automatically generated and distributed, much in the same way as a synthesis and place & route tool is doing today for HW. In this paper, we present a NoC generator that can generate an arbitrarily large Multi-core platform from an XML configuration file, targeted for single-chip FPGA platforms. The NoC generator also generates a device driver prototype together with a small test program that can be used as a template for creating larger programs. Title AdNoC case-study for Mpeg4 benchmark: improving performance and saving energy with an adaptive NoC Abstract The Network-on-Chip (NoC) topology for Multi Processor System-on-Chip (MPSoC) is a key factor for power consumption and communication time. In this work, we propose a NoC architecture that can adapt itself during run-time according to traffic patterns, based on an external control that changes the topology chosen. As a function of the application, the router connections can change from mesh to irregular topology (and vice-versa) to improve communication time and to save energy. This approach can improve the performance of an application under different traffic conditions.
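Editorial note on the NoC system generator abstract in this entry: the paper generates an arbitrarily large multi-core platform from an XML configuration file. The configuration schema and generator outputs are not given in the abstract, so the sketch below uses an invented schema and only illustrates the flavor of such generation: parsing a small XML description and enumerating the routers and links of a 2D mesh.

import xml.etree.ElementTree as ET

# Hypothetical configuration; the actual generator's XML format is not shown
# in the abstract.
config_xml = """
<noc topology="mesh" rows="2" cols="3" link_width="32"/>
"""

cfg = ET.fromstring(config_xml)
rows, cols = int(cfg.get("rows")), int(cfg.get("cols"))

routers = [(r, c) for r in range(rows) for c in range(cols)]
links = []
for r, c in routers:
    if c + 1 < cols:
        links.append(((r, c), (r, c + 1)))   # horizontal mesh link
    if r + 1 < rows:
        links.append(((r, c), (r + 1, c)))   # vertical mesh link

print(f"{len(routers)} routers, {len(links)} links, "
      f"link width {cfg.get('link_width')} bits")
for a, b in links:
    print(f"  router{a} <-> router{b}")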
For the Mpeg4 case-study application, it is possible to decrease the communication time by 16% and save around 62% in energy by choosing the right topology at each communication phase of the application. The extra area required for the adaptability is compensated by zero redesign costs, making it possible to reuse the NoC for any behavior without significant penalties. Title Multi-objective topology synthesis and FPGA prototyping framework of application specific network-on-chip Abstract The Network-on-Chip (NoC) topology synthesis problem targets the generation of NoC topologies for multiple system design objectives such as performance and area. A multi-objective NoC synthesis and prototyping framework based on an FPGA platform is proposed to design application-specific NoCs. Using the multi-objective algorithm NSGAII, the workflow aims to supply Pareto solutions for the multiple design objectives, rather than one single objective subset, so that designers can make flexible decisions according to different design objectives and budgets. This multi-objective NoC synthesis is ensured by our complete TLM- and RTL-level design framework. All the routers in the NoC library are synthesized for RTL implementation, and the area utilization information is used for hardware resource estimation at the high-level NoC synthesis step. The system performance is obtained by SystemC TLM simulation with traffic defined in the core graph. After the high-level NoC synthesis, the final selected Pareto solutions are generated and prototyped on the FPGA platform with RTL traffic generators using the same configurations as at the TLM level. Our multi-objective framework aims to provide a bridge from high-level models to FPGA execution for accurate NoC design. Experiments on multimedia benchmark applications demonstrate the efficiency of this method. CCS Hardware Communication hardware, interfaces and storage Displays and imagers Title Printing reflectance functions Abstract The reflectance function of a scene point captures the appearance of that point as a function of lighting direction. We present an approach to printing the reflectance functions of an object or scene so that its appearance is modified correctly as a function of the lighting conditions when viewing the print. For example, such a “photograph” of a statue printed with our approach appears to cast shadows to the right when the “photograph” is illuminated from the left. Viewing the same print with lighting from the right will cause the statue's shadows to be cast to the left. Beyond shadows, all effects due to the lighting variation, such as Lambertian shading, specularity, and inter-reflection can be reproduced. We achieve this ability by geometrically and photometrically controlling specular highlights on the surface of the print. For a particular viewpoint, arbitrary reflectance functions can be built up at each pixel by controlling only the specular highlights and avoiding significant diffuse reflections. Our initial binary prototype uses halftoning to approximate continuous grayscale reflectance functions. Title SplashDisplay Abstract 'SplashDisplay' is a system developed to attempt real-time volumetric display. This system directs air pressure generated by an x-y coordinate-based projectile-launching speaker through a bed of projectile beads to simulate a real-time 3D "explosion"-like effect.
The projectile beads act as a projection medium for a top-mounted visible-light projector, and through synchronized timing of these components, it is possible to create 3D, tangible effects at will. Also, by using IR LEDs and IR-sensitive cameras, user interaction can be added to this system to allow for an interactive surface. The combination of these components yields a dynamic, interactive, real-time "explosion" simulation game that can be used to demonstrate the proposed system. Title Embedded soft material displays Abstract This paper investigates methods of fabricating organic displays based on hybrid material composition. Our primary research focuses on controlling heat-activated inks (thermochromic) combined with invisible, embedded electronics. We demonstrate our process through Title C1x6: a stereoscopic six-user display for co-located collaboration in shared virtual environments Abstract Stereoscopic multi-user systems provide multiple users with individual views of a virtual environment. We developed a new projection-based stereoscopic display for six users, which employs six customized DLP projectors for fast time-sequential image display in combination with polarization. Our intelligent high-speed shutter glasses can be programmed from the application to adapt to the situation. For instance, the glasses stay open if users do not look at the projection screen, or switch to a VIP high-brightness mode if fewer than six users use the system. Each user is tracked and can move freely in front of the display while perceiving perspectively correct views of the virtual environment. Navigating a group of six users through a virtual world leads to situations in which the group will not fit through spatial constrictions. Our augmented group navigation techniques ameliorate this situation by fading out obstacles or by slightly redirecting individual users along a collision-free path. While redirection goes mostly unnoticed, both techniques temporarily give up the notion of a consistent shared space. Our user study confirms that users generally prefer this trade-off over naïve approaches. Title 3D visual illusion interpretation Abstract Our initial research based on a combination of personal interests such as holographic art [Cole and Hayward 1995], 3D computer graphics and visual illusions led us to a theoretical study on a cross-platform application designed to quickly produce 3D raw meshes from 2D bitmap interference rings mainly for personal usage. But the keynote to this article is to demonstrate how big a simple idea can grow. Early stages of this research involve a manual image construction obtained through visual illusions and mental images, exploiting the brain's faculty of filling in what is not visible with the nearest memory [Luzy 1973]. The artistic aspect is that the result is totally random. The issue now is how to digitally translate the above faculty and randomness. Title Paint color control system with infrared photothermal conversion Abstract In this paper, we describe our novel image display technology to control the paint color of everyday physical objects. RGB (Red-Green-Blue), an additive color model based on light's three primary colors, is used to create images on screens and displays, while CMYK (Cyan-Magenta-Yellow-Black), a subtractive color model, is specialized for printing on paper. Here, however, we propose a system that uses the CMYK color model and digitally controls images through a chromogenic method based on thermochromic inks.
Due to the temperature-sensitive property of these inks, this technology can digitally change painted colors of physical objects dynamically by changing temperatures. To achieve a high-resolution, low-power color control system, infrared LEDs (Light Emitting Diodes) were used to control these inks via photothermal conversion. Through the development of the system and some of its applications, we introduce our vision of a "CMYK display". Title Dynamic voltage scaling of OLED displays Abstract Unlike liquid crystal display (LCD) panels that require a high-intensity backlight, organic LED (OLED) display panels naturally consume low power and provide high image quality thanks to their self-illuminating characteristic. In spite of this fact, the OLED display panel is still the dominant power consumer in battery-operated devices. As a result, there have been many attempts to reduce the OLED power consumption. Since power consumption of any pixel of the OLED display depends on the color that it displays, previous power saving methods change the pixel color subject to a tolerance level on the color distortion specified by the users. In practice, such OLED power saving techniques cannot be used on common user applications such as photo viewers and movie players. This paper introduces the first OLED power saving technique that does not result in a significant degradation in the color and luminance values of the displayed image. The proposed technique is based on dynamic (driving) voltage scaling (DVS) of the OLED panel. Although the proposed DVS technique may degrade the luminance of the panel, the panel luminance can be restored with appropriate image compensation. Consequently, power is saved on the OLED display panel with only minor changes in the color and luminance of the image. This technique is similar to dynamic backlight scaling of LCDs, but is based on the unique characteristics of the OLED drivers. The proposed method saves power wasted in the driver transistor and the internal resistance for an amplitude modulation driver, and in the internal resistance for a pulse width modulation driver. Experimental results show that the proposed OLED DVS with image compensation technique saves up to 52.5% of the OLED power while keeping the same human-perceived image quality for the Lena image. Title Designing a multi-purpose capacitive proximity sensing input device Abstract The recent success of Nintendo's Wii and multi-touch input devices like the Apple iPhone clearly shows that people are more willing to accept new input-device technologies based on intuitive forms of interaction. Gesture-based input is thus becoming important and even relevant in specific application scenarios. A sensor type especially suited for natural gesture recognition is the capacitive proximity sensor that allows the detection of objects without any physical contact. In this paper we extend the input device taxonomy by Card et al. to include this detector category and allow modeling of devices based on advanced sensor units that involve data processing. We have created a prototype based on this modeling and evaluated its use regarding several application scenarios, where such a device might be useful. The focus of this evaluation was to determine the suitability of the device for different interaction paradigms. Title Smart glasses linking real live and social network's contacts by face recognition Abstract Imagine you participate in a big meeting with several people remotely known to you.
You remember their faces but not their names. This is where "Smart Glasses" support you: Smart Glasses consist of a (wearable) display, a tiny camera, some local processing power and an uplink to a backend service. The current implementation is based on Android and runs on smartphones; early research prototypes with different types of wearable displays have been evaluated as well. The system executes face detection and face tracking locally on the device (e.g. smartphone) and then links to the service running in the cloud to perform the actual face recognition based on the user's personal contact list (gallery). Recognized and identified persons are then displayed with their names and latest social network activities. The approach is directed towards an AR ecosystem for mobile use. Therefore, open interfaces are provided both on the device and to the service backend. We intend to take today's location-based AR systems one step further towards computer-vision-based AR to really fit the needs of today's and tomorrow's users. Title Light reallocation for high contrast projection using an analog micromirror array Abstract We demonstrate for the first time a proof-of-concept projector with a secondary array of individually controllable, analog micromirrors added to improve the contrast and peak brightness of conventional projectors. The micromirrors reallocate the light of the projector lamp from the dark parts towards the light parts of the image, before it reaches the primary image modulator. Each element of the analog micromirror array can be tipped/tilted to divert portions of the light from the lamp in two dimensions. By directing these mirrors on an image-dependent basis, we can increase both the peak intensity of the projected image as well as its contrast. In this paper, we describe and analyze the optical design for projectors using this light reallocation approach. We also discuss software algorithms to compute the best light reallocation pattern for a given input image, using the constraints of real hardware. We perform extensive simulations to evaluate the image quality and performance characteristics of this approach. Finally, we present a first proof-of-concept implementation of this approach using a prototype analog micromirror device. CCS Hardware Communication hardware, interfaces and storage External storage Title Understanding performance anomalies of SSDs and their impact in enterprise application environment Abstract SSD is known to have the erase-before-write and out-of-place update properties. When the number of invalidated pages is more than a given threshold, a process referred to as garbage collection (GC) is triggered to erase blocks after valid pages in these blocks are copied somewhere else. GC degrades both the performance and lifetime of SSD significantly because of the read-write-erase operation sequence. In this paper, we conduct intensive experiments on a 120GB Intel 320 SATA SSD and a 320GB Fusion IO ioDrive PCI-E SSD to show and analyze the following important performance issues and anomalies. The commonly accepted knowledge that the performance drops sharply as more data is being written is not always true. This is because GC efficiency, a more important factor affecting SSD performance, has not been carefully considered. It is defined as the percentage of invalid pages of a GC-erased block. It is possible to avoid the performance degradation by managing the addressable LBA range.
Estimating the residual lifetime of an SSD is a very challenging problem because it involves several interdependent and mutually interacting factors such as FTL, GC, wear leveling, workload characteristics, etc. We develop an analytical model to estimate the residual lifetime of a given SSD. The high random-read performance is widely accepted as one of the advantages of SSD. We will show that this is not true if the GC efficiency is low. Title Hardware/software architecture for flash memory storage systems Abstract This tutorial deals with various hardware/software issues in designing and implementing flash memory storage systems. It will be split into three parts - the first part is on flash memory internals and flash memory management software called the flash translation layer, the second on solid state disks that emulate hard disk drives using flash memory, and finally the third on reliability issues arising from various asynchronous/synchronous faults. Title Disk Scrubbing Versus Intradisk Redundancy for RAID Storage Systems Abstract Two schemes proposed to cope with unrecoverable or latent media errors and enhance the reliability of RAID systems are examined. The first scheme is the established, widely used, disk scrubbing scheme, which operates by periodically accessing disk drives to detect media-related unrecoverable errors. These errors are subsequently corrected by rebuilding the sectors affected. The second scheme is the recently proposed intradisk redundancy scheme, which uses a further level of redundancy inside each disk, in addition to the RAID redundancy across multiple disks. A new model is developed to evaluate the extent to which disk scrubbing reduces the unrecoverable sector errors. The probability of encountering unrecoverable sector errors is derived analytically under very general conditions regarding the characteristics of the read/write process of uniformly distributed random workloads and for a broad spectrum of disk scrubbing schemes, which includes the deterministic and random scrubbing schemes. We show that the deterministic scrubbing scheme is the most efficient one. We also derive closed-form expressions for the percentage of unrecoverable sector errors that the scrubbing scheme detects and corrects, the throughput performance, and the minimum scrubbing period achievable under operation with random, uniformly distributed I/O requests. Our results demonstrate that the reliability improvement due to disk scrubbing depends on the scrubbing frequency and the load of the system, and, for heavy-write workloads, may not reach the reliability level achieved by a simple interleaved parity-check (IPC)-based intradisk redundancy scheme, which is insensitive to the load. In fact, for small unrecoverable sector error probabilities, the IPC-based intradisk redundancy scheme achieves essentially the same reliability as that of a system operating without unrecoverable sector errors. For heavy loads, the reliability achieved by the scrubbing scheme can be orders of magnitude less than that of the intradisk redundancy scheme. Finally, the I/O and throughput performances are evaluated by means of analysis and event-driven simulation. Title Instant power-on nonvolatile FPGA based on MTJ/MOS-hybrid circuitry Abstract Title Sector log: fine-grained storage management for solid state drives Abstract Although NAND flash-based solid-state drives (SSDs) excel magnetic disks in several aspects, the costs of write operations have been limiting their performance. 
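Editorial note on the SSD performance anomalies abstract above: it defines GC efficiency as the percentage of invalid pages of a GC-erased block, and explains that low efficiency forces many valid-page copies before an erase. The sketch below only illustrates that definition with hypothetical block data and a simple greedy victim choice; it is not the authors' experimental methodology.

# Hypothetical block map: block id -> list of page states ('V' valid, 'I' invalid).
blocks = {
    0: ["I", "I", "I", "V", "I", "I", "I", "I"],
    1: ["V", "V", "I", "V", "V", "V", "V", "I"],
    2: ["I", "V", "I", "I", "V", "I", "I", "V"],
}

def gc_efficiency(pages):
    # Percentage of invalid pages in a block (the abstract's definition).
    return 100.0 * pages.count("I") / len(pages)

# Greedy victim selection: erase the block whose reclamation copies the fewest
# valid pages, i.e. the block with the highest GC efficiency.
victim = max(blocks, key=lambda b: gc_efficiency(blocks[b]))
for b, pages in blocks.items():
    print(f"block {b}: efficiency {gc_efficiency(pages):5.1f}%, "
          f"{pages.count('V')} valid pages to copy before erase")
print(f"victim block: {victim}")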
The overheads of write operations are exacerbated by the fixed write unit (page) of flash memory, which is much larger than the sector size in magnetic disks. A write request from a file system, with a data size smaller than a page, becomes a full-page write in SSDs. With the page size hidden internally in SSDs, file systems and applications may not be optimized to a fixed page size. Furthermore, to increase the density and bandwidth of flash memory, page sizes in SSDs have been increasing. In this paper, we propose a sector-level data management mechanism for SSDs, called Title Impact of flash memory on video-on-demand storage: analysis of tradeoffs Abstract There is no doubt that video-on-demand (VoD) services are very popular these days. However, disk storage is a serious bottleneck limiting the scalability of a VoD server. Disk throughput degrades dramatically due to seek time overhead when the server is called upon to serve a large number of simultaneous video streams. To address the performance problem of disks, buffer cache algorithms that utilize RAM have been proposed. Interval caching is a state-of-the-art caching algorithm for a VoD server. Title Distribution log buffer scheme for NAND flash memory Abstract Since flash memory has useful characteristics, it has been used in a variety of areas recently. However, it has a critical weakness known as "erase before write". Because of this feature, systems using flash memory have experienced serious performance degradation. Many efforts have been made to solve this problem, focusing on a number of Flash Translation Layers (FTLs). In this paper, we propose a log-buffer-based management scheme incorporating a new distribution method in which update data are separated into log blocks without a least recently used (LRU) policy. This scheme decreases the degree of association between log blocks and data blocks. We also propose a log block state to achieve fast access to log blocks. Title A study on the block fragmentation problem of ssd based on NAND flash memory Abstract A Solid State Disk (SSD) has recently attracted attention as the next generation of storage media. Among the various technologies studied to increase the performance of the SSD, parallelism has the greatest effect on performance improvement. Thus, the SSD uses a logical-address striping technique. It contributes to a sharp increase in parallelism, while some mapping methods might cause problems in block usage. This paper examines the block fragmentation that is caused when logical-address striping and hybrid mapping are used at the same time, and analyzes the impact of the fragmentation issue on block utilization and performance. Title Modelling flash devices with FDR: progress and limits Abstract We present our experience of working with the Failures-Divergence Refinement (FDR) toolkit while extending our modelling of the behaviour of Flash Memory. This effort is a step towards the low-level modelling of data-storage technology that is the target of the POSIX filestore minichallenge. The key objective was to advance previous work presented in [4, 2] to cover the full Open Nand-Flash Interface (ONFi) 2.1 model. The previous work covered a sub-model of the mandatory features of ONFi 1.0. The FDR toolkit was used for refinement/model-checking.
In addition to the compression techniques available in FDR, we also experimented with FDR Explorer - an application-programming interface (API) that allowed us to get a better picture of FDR performance. This paper summarises the progress we made, and the limits we encountered. We are now able to verify many of the operations in the ONFi 2.1 model using full Failures-Divergence refinement checking, rather than just trace refinement. Through the use of compression techniques available in the FDR toolkit, and in particular by hiding the events deeper in the model, we were able to compress the state space. The work also reports the number of attempts to compile the full ONFi 2.1 model. NA Title Combined magnetic- and circuit-level enhancements for the nondestructive self-reference scheme of STT-RAM Abstract A nondestructive self-reference read scheme (NSRS) was recently proposed to overcome the bit-to-bit variation in Spin-Transfer Torque Random Access Memory (STT-RAM). In this work, we introduced three magnetic- and circuit-level techniques, including 1) R-I curve skewing, 2) yield-driven sensing current selection, and 3) ratio matching to improve the sense margin and robustness of NSRS. The measurements of our 16Kb STT-RAM test chip show that compared to the original NSRS design, our proposed technologies successfully increased the sense margin by 2.5X with minimized impacts on the memory reliability and hardware cost. CCS Hardware Communication hardware, interfaces and storage Networking hardware CCS Hardware Communication hardware, interfaces and storage Printers Title Printing reflectance functions Abstract The reflectance function of a scene point captures the appearance of that point as a function of lighting direction. We present an approach to printing the reflectance functions of an object or scene so that its appearance is modified correctly as a function of the lighting conditions when viewing the print. For example, such a "photograph" of a statue printed with our approach appears to cast shadows to the right when the "photograph" is illuminated from the left. Viewing the same print with lighting from the right will cause the statue's shadows to be cast to the left. Beyond shadows, all effects due to the lighting variation, such as Lambertian shading, specularity, and inter-reflection can be reproduced. We achieve this ability by geometrically and photometrically controlling specular highlights on the surface of the print. For a particular viewpoint, arbitrary reflectance functions can be built up at each pixel by controlling only the specular highlights and avoiding significant diffuse reflections. Our initial binary prototype uses halftoning to approximate continuous grayscale reflectance functions. Title Designing a multi-purpose capacitive proximity sensing input device Abstract The recent success of Nintendo's Wii and multi-touch input devices like the Apple iPhone clearly shows that people are more willing to accept new input-device technologies based on intuitive forms of interaction. Gesture-based input is thus becoming important and even relevant in specific application scenarios. A sensor type especially suited for natural gesture recognition is the capacitive proximity sensor that allows the detection of objects without any physical contact. In this paper we extend the input device taxonomy by Card et al. to include this detector category and allow modeling of devices based on advanced sensor units that involve data processing.
We have created a prototype based on this modeling and evaluated its use regarding several application scenarios, where such a device might be useful. The focus of this evaluation was to determine the suitability of the device for different interaction paradigms. Title Print centers: navigating the sea of ink Abstract At the University of Oregon, School of Architecture and Allied Arts we use a Keep-It-Simple-Sailor (KISS) approach to managing our large-format academic printing services to campus. By limiting options and providing simple straightforward documentation we are able to provide large-format printing to a large number of customers with a very short turnaround. Our customers know what they can expect from our services and as a result have a fairly high degree of customer satisfaction. What more can you hope for? Well, that's what we would like to determine through a panel discussion with other large-format print center managers. What large-format print services are peer institutions offering to academic units? What challenges are they experiencing implementing those services? Are these challenges unique to that institution, or do others have solutions that work? Are your customers about to mutiny? Our experiences may be able to help. Together maybe we can avoid the rocks in a sea of ink and bring us all to a safe harbor. Title Workshop on coupled display visual interfaces Abstract Interactive displays are increasingly distributed in a broad spectrum of everyday life environments: They have very diverse form factors and portability characteristics, support a variety of interaction techniques, and can be used by a variable number of people. The coupling of multiple displays can thus create interactive "ecosystems" which mingle in the social context, and generate novel settings of communication, performance and ownership. The objective of this workshop is to focus on the range of research challenges and opportunities afforded by applications that rely on visual interfaces that can spread across multiple displays. Such displays are physically Title Bippity, boppity, boo: magical presentations using color and large format printing Abstract IT User Services at the University of Delaware offers students, faculty and staff large-format color printing to enhance presentations and graphic communication in a cost-effective manner. We offer support for their projects from start to finish. Extensive documentation and one-on-one help are available to assist users in designing their documents for large-format color printing, configuring their print drivers, and creating and viewing their documents as PDF files before printing the final copy. The response to printing large-format color documents on campus has been overwhelming. We have seen an increase in volume due to contests, advertising for departments, campus-wide campaigns, conferences, and course-specific projects. Our convenient location, inexpensive pricing, and extensive support all contribute to the success of the service. This poster session will cover the hardware and software evaluation process, configuration, printing material selection and costs, cost comparisons and recovery, and ongoing support for this service. Title The design space of input devices Abstract NA 56 Citations Title Touchscreen field specification for public access database queries: let your fingers do the walking Abstract Title The multi-Media workstation Abstract Good afternoon, ladies and gentlemen.
Thank you very much for taking time out from the parties to join us for one of the peripheral activities of SIGGRAPH. As you know, the panel that we're going to be holding this afternoon is entitled the Multi-Media Workstation. Before I make some introductory remarks, I am required to make some administrative remarks. The first thing is to remind you that the proceedings of all of the panels are being audio taped this year for subsequent transcription and publication. What that means, is that when we have the audience interaction, please come to the microphones that are scattered around the floor to make your remarks. Otherwise, I won't be able to recognize you. The second thing I want to mention to you is that when we're done at 5:15 we are going to vacate the stage. We're going to vacate the room so the AV people can lock up. If you want to continue discussion with us, there's a breakout room that's been set aside, Salon J, which is down around the corner. So join us there please, because we'll be scooting out of here right away. Finally, I need to tell you that the --- for those of you who are involved --- the Pioneers Reception will be held between 6:00 and 9:00 at the Computer Museum this evening, and buses will leave from the Boylston Street exit of the Convention Center at 5:00, 5:30 and 6:00. Absolutely no video or audio taping allowed at the Pioneers. You don't want to hear any of those old reminiscences repeated. Let's get on with the business of the afternoon. Multi-Media Workstations. A couple of preliminary remarks that I think all of my colleagues up here will agree with. The things that we're going to be discussing this afternoon do not represent fundamentally new technologies. You've been able to buy add-in video cards and audio devices for personal computers and workstations for some years now. What we are going to be addressing is a confluence of many technologies --- hardware and software --- that has finally made it possible to envision a fully integrated system that will incorporate all of these multi-media capabilities. So we're giving you a vision of maybe not what you're seeing at this year's SIGGRAPH, but certainly a SIGGRAPH or two from now, I can confidently predict that you're going to be seeing workstations that incorporate the kinds of capabilities that you'll hear discussed this afternoon. I should also emphasize that we are not here to give the kind of a presentation that you might expect from a group of folks --- from the Media Lab or from Xerox PARC who are going to tell you about some of the far-out kinds of things that they're working on. I emphasize again the technologies that are being described this afternoon are almost here and now, and will soon be available to you. Now let me make some comments about how in my particular environment I came to be interested in the concept of a multi-media workstation. I think each of us will probably have different stories to tell about why multi-media is important to the kinds of applications that we're involved with or envision. 
Title On global wire ordering for macro-cell routing Abstract Title Determining online retrieval system display size Abstract CCS Hardware Communication hardware, interfaces and storage Sensor applications and deployments CCS Hardware Communication hardware, interfaces and storage Sensor devices and platforms CCS Hardware Communication hardware, interfaces and storage Sound-based input / output Title The spoken web: software development and programming through voice Abstract It has been a constant aim of computer scientists, programming language designers and practitioners to raise the level of programming abstractions and bring them as close to the user's natural context as possible. The efforts started right from our transition from machine code programming to assembly language programming, from there to high-level procedural languages, followed by object-oriented programming. Nowadays, service-oriented software development and composition are the norm. There have also been notable efforts, such as the Alice system from CMU, to simplify the programming experience through the use of 3D virtual worlds. The holy grail has been to enable non-technical users, such as kids, to understand and pick up programming and software development easily. We present a novel approach to software development that lets people use their voice to program or create new software through composition. We demonstrate some basic programming tasks achieved by simply talking to a system over an ordinary phone. Such programs constructed by talking can be created in the user's local language and do not require IT literacy or even literacy as a prerequisite. We believe this approach will have a deep impact on software development, especially the development of web-based software, in the very near future. Title Implementation of dictation system for Malayalam office document Abstract This paper describes the implementation of a dictation system for Malayalam office documents in OpenOffice Writer. The dictation system is built using a state-of-the-art large-vocabulary continuous speech recognition system for the Malayalam language. This system supports a vocabulary of the 5000 most commonly used office-domain words and is equipped with a vocabulary-updating facility to handle out-of-vocabulary words. The system is based on Hidden Markov Models (HMMs), trained with a large amount of data (25 hours). The training data were collected in a room environment, ensuring speaker variance and phonetic richness. A hybrid model that integrates a rule-based method with a statistical method is used to handle pronunciation variations in the creation of the pronunciation dictionary. The system is the first of its kind to simplify the tedious task of typing in Malayalam. Apart from dictating office documents with 75±5% accuracy, the system is equipped with a suggestion-generation facility by which the user is provided with alternate words for misrecognized words. The system also supports some basic voice-command operations for file operations such as open, save, and close. The system has an option to adapt to the user's voice, which improves recognition accuracy by 2-5%. The system has been successfully implemented in OpenOffice Writer and tested. Title PhonePeti: exploring the role of an answering machine system in a community radio station in India Abstract Community Radio (CR) stations are short-range radio stations that serve the local media needs of their surrounding communities.
Community participation by way of helping set the station agenda, airing of people's voices, and providing them with a local communication medium, is the defining feature of CR. But this philosophy has been hard to execute in practice because of logistical difficulties, with station staff not being able to reach out to a listenership-base spread across several hundreds of square kilometers. In today's context though, the high penetration of mobile phones has made it easier for listeners to participate in the running of radio stations, but the potential of telephony and radio integration has been exploited only minimally. In this paper, we explore the use of PhonePeti, an automated answering machine system in a community radio station based in Gurgaon, India. Answering machines are one of several ways to bring together the radio and telephony mediums. We show that this alone has the potential to considerably improve community engagement, but it also opens up many interesting issues on usability. Through quantitative and content analysis of 758 calls from 411 callers over two iterations of PhonePeti, combined with telephonic interviews of several callers, we show that significant challenges arise in being able to explain the concept of an answering machine to people who have not been exposed to a similar system in the past. We then show, through call statistics, that PhonePeti has increased community engagement by enabling more listeners to reach the station. Finally, we show that an answering machine system can be used to collect useful information from the callers. Title Power to the peers: authority of source effects for a voice-based agricultural information service in rural India Abstract Online communities enable people to easily connect and share knowledge across geographies. Mobile phones can enable billions of new users in emerging countries to participate in these online communities. In India, where social hierarchy is important, users may over-value institutionally-recognized authorities relative to peer-sourced content. We tested this hypothesis through a controlled experiment of source authority effects on a voice-based agricultural information service for farmers in Gujarat, India. 305 farmers were sent seven agricultural tips via automated phone calls over a two-week period. The same seven tips were each voice-recorded by two university scientists and two peer farmers. Participants received a preview of the tip from a randomly assigned source via the automated call, and played the remainder of the tip by calling a dedicated phone number. Participants called the follow-up number significantly more often when the tip preview was recorded by a peer than a scientist. On the other hand, in interviews conducted both before and after the experiment, a majority of farmers maintained that they preferred receiving information from scientists. This stated preference may have been expressing the more socially acceptable response. We interpret our experimental results as a demonstration of the demand for peer-based agricultural information dissemination. We conclude with design implications for peer-to-peer information services for rural communities in India. Title A voice service for user feedback on school meals Abstract Research using voice-based services as a technology platform for providing information access and services within developing world regions has shown much promise. 
The results for design and deployment of such voice-based services have varied depending on the application domain, user community and context. In this paper, we describe our work on developing a voice-based service for obtaining feedback from school children, a previously unexplored user community. Through a user study, focus group discussions and observations of learners' interaction with multiple design prototype versions, we investigated several factors around input modality preference, language preference, performance and overall user experience. Whilst no significant differences were observed for performance across the prototypes, there were strong preferences for speech (input modality) and English (language). Focus group discussions revealed rich information on learners' perceptions around trust, confidentiality and general system usage. We highlight several design changes made and provide further recommendations on designing for this user community. Title Multi-party human-robot interaction with distant-talking speech recognition Abstract Speech is one of the most natural media for human communication, which makes it vital to human-robot interaction. In real environments where robots are deployed, distant-talking speech recognition is difficult to realize due to the effects of reverberation. This leads to the degradation of speech recognition and understanding, and hinders seamless human-robot interaction. To minimize this problem, traditional speech enhancement techniques optimized for human perception are adopted to achieve robustness in human-robot interaction. However, humans and machines perceive speech differently: an improvement in speech recognition performance may not automatically translate to an improvement in the human-robot interaction experience (as perceived by the users). In this paper, we propose a method for optimizing speech enhancement techniques specifically to improve automatic speech recognition (ASR) with emphasis on the human-robot interaction experience. Experimental results using real reverberant data in a multi-party conversation show that the proposed method improved the human-robot interaction experience in severe reverberant conditions compared to the traditional techniques. Title A social robot as an aloud reader: putting together recognition and synthesis of voice and gestures for HRI experimentation Abstract Advances in voice recognition have made possible robotics applications controlled by voice alone. However, user input through gestures and robot output gestures both create a more vivid interaction experience. In this article, we present an aloud-reading application offering all these interaction methods for the HRI-research robot Maggie. It gives us a testbed for user studies investigating the effect of these additional interaction methods. Title Spatial language experiments for a robot fetch task Abstract This paper outlines a new study that investigates spatial language for use in human-robot communication. The scenario studied is a home setting in which the elderly resident has misplaced an object, such as eyeglasses, and the robot will help the resident find the object. We present results from phase I of the study, in which we investigate spatial language generated to a human addressee or a robot addressee in a virtual environment.
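The Malayalam dictation abstract above reports word accuracy of 75±5%. As a point of reference only (this is not material from that paper, and the sample sentences are invented), word accuracy is conventionally derived from a word-level edit distance against a reference transcript, along the lines of the following sketch:

```python
# Illustrative only (not material from the dictation paper above; the sample
# sentences are invented): word accuracy is conventionally 1 - WER, where WER
# is the word-level edit distance between the recognizer output and a
# reference transcript, divided by the number of reference words.

def word_edit_distance(ref, hyp):
    """Levenshtein distance over word sequences (substitutions, insertions, deletions)."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)]

reference = "open the quarterly report and save it".split()
hypothesis = "open the quartered report and save".split()
wer = word_edit_distance(reference, hypothesis) / len(reference)
print(f"word accuracy = {100 * (1 - wer):.1f}%")  # ~71% for this toy pair
```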
Title Characterizing user habituation in interactive voice interface: experience study on home network system Abstract In this paper, we empirically characterize the user's habituation to voice control in the Home Network System (HNS). We propose three kinds of metrics that capture the user's habituation quantitatively: (M1) the time of system speech, (M2) the number of support commands and (M3) the number of mistakes. The experimental results show that metrics M1 and M2 reasonably capture the user's habituation. Title TapBeats: accessible and mobile casual gaming Abstract Conventional video games today rely on visual cues to drive user interaction, and as a result, there are few games for blind and low-vision people. To address this gap, we created an accessible and mobile casual game for Android called TapBeats, a musical rhythm game based on audio cues. In addition, we developed a gesture system that utilizes text-to-speech and haptic feedback to allow blind and low-vision users to interact with the game's menu screens using a mobile phone touchscreen. A graphical user interface is also included to encourage sighted users to play as well. Through this game, we aimed to explore how both blind and sighted users can share a common game experience. CCS Hardware Communication hardware, interfaces and storage Tactile and hand-based interfaces CCS Hardware Communication hardware, interfaces and storage Scanners CCS Hardware Communication hardware, interfaces and storage Wireless devices Title A marine experiment of a long distance communication sensor network -MAD-SS- Abstract We conducted long-distance radio propagation experiments at 1-10 mW / 145 MHz to realize low-power, long-distance communication for wildlife research and disaster prevention telemetry. In this paper, we describe how we succeeded in long-distance communication from a ferryboat to the top of Mt. Asugiyama (elevation: 501 m, distance: 15 km) in Kure, Hiroshima, Japan, in the verification test of our method using 3 W radio power. We found that our method has sufficient capability to achieve such long-distance communication at 10 bps / 10 mW under battery-cell operation at sea, if we use SSB mode and the SNR in the SSB bandwidth is better than -10 dB. Title Modulated backscatter for ultra-low power uplinks from wearable and implantable devices Abstract Wearable and implantable wireless biomedical devices are often constrained by the limited bandwidth and high power consumption of their communication links. The VHF or UHF transceivers (e.g. MICS radios) traditionally used for this communication function have relatively high power consumption, on the order of mW, due to the high bias currents required for the analog sections of the radio. To reduce overall power consumption, both the data rate and the duty cycle of the radio are usually minimized, because the lifetime of the device is limited by the energy density of available battery technologies. Recent innovations in modulated backscatter techniques offer the possibility of a radical reduction in the power cost and complexity of the data uplink, while significantly improving data rate. This is achieved by a re-partitioning of the communication link. Backscatter techniques shift the burden of power cost and complexity from the remote device to a base station. Instead of actively transmitting an RF signal, the remote device uplinks data to the base station by modulating its reflected field.
We present two ultra-low power biotelemetry systems that leverage modulated backscatter in both the near-field and far-field propagation regimes. The first example operates in the far field and is designed to telemeter multiple channels of neural/EMG signals from dragonflies in flight. This device has a mass of 38 mg, a data rate of 5 Mbit/s, and a range of approximately 5 m. The second example operates in the near field and is designed to be implanted in mice. The sensor has a maximum implant depth of 6 cm and can transmit at data rates of up to 30 Mbit/s. The power costs of the animal side of the two data links are 4.9 pJ/bit and 16.4 pJ/bit, respectively. Title Poster: a construction of a long distance communication sensor network node using Arduino and Mad-SS shield Abstract In this paper, we describe the construction of a long distance communication sensor network node for Arduino. We develop a Mad-SS Shield prototype system, which succeeds in about 5-km transmission with a 1 mW output. Title Speedy FPGA-based packet classifiers with low on-chip memory requirements Abstract This article pursues speedy packet classification with low on-chip memory requirements realized on a Xilinx Virtex-6 FPGA. Based on hashing round-down prefixes specified in filter rules (dubbed HaRP), our implemented classifier is demonstrated to exhibit an extremely low on-chip memory requirement (lowering the byte count per rule by a factor of 8.6 in comparison with its most recent counterpart [2]), taking only 50% of Virtex-6 on-chip memory to store every large rule dataset (with some 30K rules) examined. In addition, it achieves a higher throughput than any known FPGA implementation, reaching more than 200 MPPS (million packet lookups per second) with 8 processing units and 8 memory banks in the HaRP pipeline to support a line rate of over 130 Gbps under bi-directional traffic in the worst case with 40-byte packets. By reducing memory probes per lookup, enhanced HaRP can further boost the classification speed to 255 MPPS. Title Cognitive wireless sensor networks for highway safety Abstract In this paper, we present our perspective on cognition in wireless sensor networks for highway safety applications. Cognition in the context of sensor networks deals with the ability to be aware of the environment and end-user requirements and proactively adapt to them, thus benefiting the network as a whole. An implementation showing how cognition can be introduced into a sensor network to make it smart is illustrated through an experiment in this paper. Cognitive communication, cognitive components and how interaction among the various network elements in a sensor network can be improved to enhance network performance are the driving ideas behind this work. Title Mini-sink mobility with diversity-based routing in wireless sensor networks Abstract The performance of Wireless Sensor Networks (WSNs) is constrained by congestion, latency, data loss and other phenomena. In this paper, we propose a model based on the use of Mini-Sinks (MSs), each with considerable storage capacity, instead of a single sink for collecting the data. The idea is that one or more MSs are mobile and move according to an arbitrary mobility model inside the sensor field to collect data within their coverage areas and forward it towards the sink. The Energy Conserving Routing Protocol (ECRP), based on route diversity, is implemented in MSs and sensors in order to optimize the transmission cost of the forwarding scheme.
A set of multiple paths between MSs and sensors is generated to reduce local congestion phenomena and to distribute the global traffic over the entire network. Since a topology incorporating MSs changes constantly due to their mobility, we analyze the impact of global connectivity for a given WSN. Simulations were performed in order to validate the performance of our model. We compare the results obtained with those for a single static sink, and show that our model gives better results in terms of lower congestion, energy consumption and broadcast latency. Title A 65 nm CMOS low power RF front-end for L1/E1 GPS/Galileo signals Abstract In this paper, we present a low-power RF front-end designed for L1/E1 GPS/Galileo, implemented in 65 nm CMOS technology. It draws 16 mA from an external 1.2 V supply, for a power consumption of less than 20 mW. The chip can also work at 1.8 V using a low-dropout regulator embedded in the chip. The device integrates a high-performance low-noise amplifier, an AGC that does not need any external capacitor, and a PLL loop filter, reducing the external component count: only a few passives for matching and an external TCXO for the frequency reference are needed. A programmable synthesizer manages most of the commonly used TCXO frequencies. Two default operating modes and related reference frequencies have been defined: 16.368 MHz and 26 MHz. The IF filter is fully embedded. It is a complex filter with two operating modes: the first for the GPS-only signal, the second for both GPS and Galileo signals. Its characteristics can be adjusted through a proper switching cascade of adaptive first-order cells. The baseband data bits are generated by a 3-bit ADC. The whole die area is 2.6 mm2. Title Exploration of FPGA interconnect for the design of unconventional antennas Abstract The programmable interconnection resources are one aspect that distinguishes FPGAs from other devices. The abundance of these resources in modern devices almost always assures us that the most complex design can be routed. This underutilized resource can be used for other unintended purposes. One such use, explored here, is to concatenate large networks together to form pseudo-equipotential geometric shapes. These shapes can then be evaluated in terms of their ability to radiate (modulated) energy off the chip to a nearby receiver. In this paper, an unconventional method of building such transmitters on an FPGA is proposed. Arbitrary shaped antennas are created using a unique flow involving an experimental router and binary images. An experiment setup is used to measure the performance of the antennas created. Title Router with centralized buffer for network-on-chip Abstract Network-on-Chip (NoC) architectures are proposed as a possible solution to the wiring challenge. Both NoC performance and energy budget depend heavily on the routers' buffer resources. This paper introduces a centralized buffer structure, which dynamically allocates buffer resources according to network traffic conditions. This centralized buffer management scheme increases the buffer utilization and decreases the overall buffer use by an average of 50% in our case study analysis compared to a fixed buffer assignment strategy. The area overhead can be traded off against the flexibility of on-demand buffer management.
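The modulated-backscatter abstract above quotes uplink energy costs of 4.9 pJ/bit and 16.4 pJ/bit at data rates of 5 Mbit/s and up to 30 Mbit/s. As a rough back-of-envelope check, not a figure taken from that paper, average RF-side link power is simply the data rate multiplied by the energy per bit, which is what places these uplinks in the microwatt range:

```python
# Back-of-envelope only (not a figure from the backscatter paper above): the
# average RF-side power of an uplink is data_rate * energy_per_bit, which is
# why pJ/bit costs translate into microwatt-scale radios.

def uplink_power_watts(bits_per_second, joules_per_bit):
    """Average uplink power implied by a data rate and an energy-per-bit cost."""
    return bits_per_second * joules_per_bit

# Figures quoted in the abstract: 5 Mbit/s at 4.9 pJ/bit (far-field dragonfly
# telemetry) and up to 30 Mbit/s at 16.4 pJ/bit (near-field mouse implant).
for rate_bps, pj_per_bit in [(5e6, 4.9), (30e6, 16.4)]:
    power_uw = uplink_power_watts(rate_bps, pj_per_bit * 1e-12) * 1e6
    print(f"{rate_bps / 1e6:.0f} Mbit/s at {pj_per_bit} pJ/bit -> {power_uw:.1f} uW")
```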
Title A demonstration of frequency hopping ad hoc and sensor network synchronization method on warp boards Abstract Despite the fact that time synchronization is one of the main issues in frequency hopping mobile ad hoc networks (FH MANET) or time synchronous wireless sensor networks (TWSN), it appears to be a rarely addressed research subject. In the former case, a network-wide time reference is needed in FH code phase synchronization and, in the latter case, e.g., in time stamping of the sensed phenomenon. Herein, a previously developed, simulated and published (by the authors) network time and FH code phase synchronization method that is MANET compliant is demonstrated in a real-time environment on wireless open-access research platforms (WARP). The time or FH code phase synchronization process consists of both initial synchronization and synchronization maintaining phases. The proposed initial decision-making method is master-free and based on node identifiers (ID) and local information that a node collects from the surroundings. The synchronization is maintained by regularly transmitted (and received) synchronization messages. The demonstration cases are organized so that the frequency hopping characteristics of the waveform are shown together with the algorithm's ability to conduct initial synchronization, synchronization maintenance and late entries to the existing network. Further, payload data streaming with the synchronized FH waveform is demonstrated. CCS Hardware Communication hardware, interfaces and storage Wireless integrated network sensors Title A marine experiment of a long distance communication sensor network -MAD-SS- Abstract We conducted long-distance radio propagation experiments at 1-10 mW / 145 MHz to realize low-power, long-distance communication for wildlife research and disaster prevention telemetry. In this paper, we describe how we succeeded in long-distance communication from a ferryboat to the top of Mt. Asugiyama (elevation: 501 m, distance: 15 km) in Kure, Hiroshima, Japan, in the verification test of our method using 3 W radio power. We found that our method has sufficient capability to achieve such long-distance communication at 10 bps / 10 mW under battery-cell operation at sea, if we use SSB mode and the SNR in the SSB bandwidth is better than -10 dB. Title Poster: a construction of a long distance communication sensor network node using Arduino and Mad-SS shield Abstract In this paper, we describe the construction of a long distance communication sensor network node for Arduino. We develop a Mad-SS Shield prototype system, which succeeds in about 5-km transmission with a 1 mW output. Title Mini-sink mobility with diversity-based routing in wireless sensor networks Abstract The performance of Wireless Sensor Networks (WSNs) is constrained by congestion, latency, data loss and other phenomena. In this paper, we propose a model based on the use of Mini-Sinks (MSs), each with considerable storage capacity, instead of a single sink for collecting the data. The idea is that one or more MSs are mobile and move according to an arbitrary mobility model inside the sensor field to collect data within their coverage areas and forward it towards the sink. The Energy Conserving Routing Protocol (ECRP), based on route diversity, is implemented in MSs and sensors in order to optimize the transmission cost of the forwarding scheme.
A set of multiple paths between MSs and sensors is generated to reduce local congestion phenomena and to distribute the global traffic over the entire network. Since a topology incorporating MSs changes constantly due to their mobility, we analyze the impact of global connectivity for a given WSN. Simulations were performed in order to validate the performance of our model. We compare the results obtained with those for a single static sink, and show that our model gives better results in terms of lower congestion, energy consumption and broadcast latency. Title Exploration of FPGA interconnect for the design of unconventional antennas Abstract The programmable interconnection resources are one aspect that distinguishes FPGAs from other devices. The abundance of these resources in modern devices almost always assures us that the most complex design can be routed. This underutilized resource can be used for other unintended purposes. One such use, explored here, is to concatenate large networks together to form pseudo-equipotential geometric shapes. These shapes can then be evaluated in terms of their ability to radiate (modulated) energy off the chip to a nearby receiver. In this paper, an unconventional method of building such transmitters on an FPGA is proposed. Arbitrary shaped antennas are created using a unique flow involving an experimental router and binary images. An experiment setup is used to measure the performance of the antennas created. Title Light amplifiers and solitons Abstract CCS Hardware Communication hardware, interfaces and storage Electro-mechanical devices CCS Hardware Integrated circuits 3D integrated circuits CCS Hardware Integrated circuits Interconnect CCS Hardware Integrated circuits Semiconductor memory CCS Hardware Integrated circuits Digital switches CCS Hardware Integrated circuits Logic circuits CCS Hardware Integrated circuits Reconfigurable logic and FPGAs CCS Hardware Very large scale integration design 3D integrated circuits CCS Hardware Very large scale integration design Analog and mixed-signal circuits CCS Hardware Very large scale integration design Application-specific VLSI designs CCS Hardware Very large scale integration design Design reuse and communication-based design CCS Hardware Very large scale integration design Design rules CCS Hardware Very large scale integration design Economics of chip design and manufacturing CCS Hardware Very large scale integration design Full-custom circuits CCS Hardware Very large scale integration design VLSI design manufacturing considerations CCS Hardware Very large scale integration design On-chip resource management CCS Hardware Very large scale integration design On-chip sensors CCS Hardware Very large scale integration design Standard cell libraries CCS Hardware Very large scale integration design VLSI packaging CCS Hardware Very large scale integration design VLSI system specification and constraints Title Evaluation of voltage stacking for near-threshold multicore computing Abstract This paper evaluates voltage stacking in the context of near-threshold multicore computing. Key attributes of voltage stacking are investigated using results from a test-chip prototype built in 150nm FDSOI CMOS. By "stacking" logic blocks on top of each other, voltage stacking reduces the chip current draw and simplifies off-chip power delivery but within-die voltage noise due to inter-layer current mismatch is an issue. 
Results show that, unlike conventional power delivery schemes, supply rail impedance in voltage-stacked systems depends on aggregate power consumption, leading to better noise immunity for high-power (low-impedance) operation for many-core processors. Title Cost-effective power delivery to support per-core voltage domains for power-constrained processors Abstract Per-core voltage domains can improve performance under a power constraint. Most commercial processors, however, only have one chip-wide voltage domain because splitting the voltage domain into per-core voltage domains and powering them with multiple off-chip voltage regulators (VRs) incurs a high cost for the platform and package designs. Although using on-chip switching VRs can be an alternative solution, integrating high-quality inductors and cores on the same chip has been a technical challenge. In this paper, we propose a cost-effective power delivery technique to support per-core voltage domains. Our technique is based on the observations that (i) core-to-core voltage variations are relatively small for most execution intervals when the voltages/frequencies are optimized to maximize performance under a power constraint and (ii) per-core power-gating devices augmented with small circuits can serve as low-cost VRs that can provide high efficiency in situations like (i). Our experimental results show that processors using our technique can achieve power efficiency as high as those using per-core on-chip switching VRs at much lower cost. Title SRAM leakage in CMOS, FinFET and CNTFET technologies: leakage in 8t and 6t sram cells Abstract An in-depth study of the static power consumption in 6T and 8T SRAM cell designs based on 32nm CMOS, FinFET and CNTFET technologies is presented. In addition to the inverter leakage currents, memory cells that are not active when write or read operations occur draw current from/to the bus drivers, increasing the total standby power consumption. The FinFET schemes yield substantially lower write (1023.5 pA) and read (522.5 pA) leakage currents in 8T cells, which are 10.4% and 4.4% of the amount in CMOS 8T cells. A CNTFET 6T cell consumes 1.9% and 2.8% of the leakage current drawn by a CMOS 6T cell for write and read. Title Sustainable multi-core architecture with on-chip wireless links Abstract Current commercial system-on-chip (SoC) designs integrate an increasingly large number of pre-designed cores, and their number is predicted to increase significantly in the near future. Specifically, molecular-scale computing will allow single or even multiple order-of-magnitude improvements in device densities. In the design of high-performance massive multi-core chips, power and temperature have become dominant constraints. Increased power consumption can raise chip temperature, which in turn can decrease chip reliability and performance and increase cooling costs. The new, ensuing possibilities in terms of single-chip integration call for new paradigms, architectures, and infrastructures for high-bandwidth and low-power interconnects. In this paper we demonstrate how small-world Network-on-Chip (NoC) architectures with long-range wireless links enable the design of energy- and thermally-efficient sustainable multi-core platforms.
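The SRAM leakage abstract above reports FinFET 8T leakage currents together with the percentage of the CMOS 8T baseline they represent; the baseline itself is not quoted, but it can be recovered from those ratios. A small, illustrative calculation (implied values, not numbers stated in that paper) follows:

```python
# Back-of-envelope only (implied values, not numbers stated in the SRAM paper
# above): the abstract gives the FinFET 8T leakage currents and the fraction of
# the CMOS 8T baseline they represent, so the baseline can be recovered by
# dividing each current by its fraction.

finfet_8t = {
    "write": (1023.5e-12, 0.104),  # (FinFET leakage [A], fraction of CMOS leakage)
    "read":  (522.5e-12,  0.044),
}
for op, (finfet_amps, fraction_of_cmos) in finfet_8t.items():
    implied_cmos_amps = finfet_amps / fraction_of_cmos
    print(f"{op}: FinFET {finfet_amps * 1e12:.1f} pA "
          f"-> implied CMOS baseline ~{implied_cmos_amps * 1e9:.1f} nA")
```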
Title CMOS compatible many-core noc architectures with multi-channel millimeter-wave wireless links Abstract Traditional many-core designs based on the Network-on-Chip (NoC) paradigm suffer from high latency and power dissipation as the system size scales up, due to their inherent multi-hop communication. NoC performance can be significantly enhanced by introducing long-range, low-power, and high-bandwidth single-hop wireless links between far-apart cores. This paper presents a design methodology and performance evaluation for a hierarchical small-world NoC with CMOS-compatible on-chip millimeter (mm)-wave wireless long-range communication links. The proposed wireless NoC offers significantly higher bandwidth and lower energy dissipation compared to its conventional non-hierarchical wired counterpart in the presence of both uniform and non-uniform traffic patterns. The performance improvement is achieved through efficient data routing and optimum placement of wireless hubs. Multiple wireless shortcuts operating simultaneously provide an energy-efficient solution for the design of many-core communication infrastructures. Title SpiNNaker: Design and Implementation of a GALS Multicore System-on-Chip Abstract The design and implementation of globally asynchronous locally synchronous systems-on-chip is a challenging activity. The large size and complexity of the systems require the use of computer-aided design (CAD) tools but, unfortunately, most tools do not work adequately with asynchronous circuits. This article describes the successful design and implementation of SpiNNaker, a GALS multicore system-on-chip. The process was completed using commercial CAD tools from synthesis to layout. A hierarchical methodology was devised to deal with the asynchronous sections of the system, encapsulating and validating timing assumptions at each level. The crossbar topology combined with a pipelined asynchronous fabric implementation allows the on-chip network to meet the stringent requirements of the system. The implementation methodology constrains the design in a way that allows the tools to complete their tasks successfully. A first test chip, with reduced resources and complexity, was taped out using the proposed methodology. Test chips were received in December 2009 and were fully functional. The methodology had to be modified to cope with the increased complexity of the SpiNNaker SoC. SpiNNaker chips were delivered in May 2011 and were also fully operational, and the interconnect requirements were met. Title Power efficiency as the #1 design constraint: technical perspective Abstract Title Understanding sources of inefficiency in general-purpose chips Abstract Scaling the performance of a power-limited processor requires decreasing the energy expended per instruction executed, since energy/op * op/second is power. To better understand what improvement in processor efficiency is possible, and what must be done to capture it, we quantify the sources of the performance and energy overheads of a 720p HD H.264 encoder running on a general-purpose four-processor CMP system. The initial overheads are large: the CMP was 500x less energy efficient than an Application Specific Integrated Circuit (ASIC) doing the same job. We explore methods to eliminate these overheads by transforming the CPU into a specialized system for H.264 encoding. Broadly applicable optimizations like single instruction, multiple data (SIMD) units improve CMP performance by 14x and energy by 10x, which is still 50x worse than an ASIC.
The problem is that the basic operation costs in H.264 are so small that even with a SIMD unit doing over 10 ops per cycle, 90% of the energy is still overhead. Achieving ASIC-like performance and efficiency requires algorithm-specific optimizations. For each subalgorithm of H.264, we create a large, specialized functional/storage unit capable of executing hundreds of operations per instruction. This improves energy efficiency by 160x (instead of 10x), and the final customized CMP reaches the same performance as an ASIC solution and comes within 3x of its energy in comparable area. Title From academic ideas to practical physical design tools Abstract In this paper, the author discusses how ideas from academic research are adapted to make physical design tools that are both successful and practical. Excluding recent developments, the author uses his past experiences to review the thinking process of selecting appropriate algorithms and the progressive optimization idea for creating effective design tools. The review mainly focuses on routability and timing optimization issues, though the ideas presented in the paper can be applied or extended to new tool development. Title VLSI design of analog multiplier based on NMOS technology Abstract In this paper, an all-NMOS voltage-mode four-quadrant analog multiplier based on a basic NMOS differential amplifier that can produce the output signal in voltage form without using resistors is presented. The proposed circuit has been fabricated and simulated with 0.35 micron technology. The power consumption is about 3.6 mW from a ±2.5 V power supply voltage, and the total harmonic distortion is 0.85% with a 1 V input signal. CCS Hardware Power and energy Thermal issues CCS Hardware Power and energy Energy generation and storage CCS Hardware Power and energy Energy distribution CCS Hardware Power and energy Impact on the environment CCS Hardware Power and energy Power estimation and optimization CCS Hardware Electronic design automation High-level and register-transfer level synthesis CCS Hardware Electronic design automation Hardware description languages and compilation Title Towards an open sound card: bare-bones FPGA board in context of PC-based digital audio: based on the AudioArduino open sound card system Abstract The architecture of a sound card can, in simple terms, be described as an electronic board containing digital bus interface hardware, and analog-to-digital (A/D) and digital-to-analog (D/A) converters; soundcard driver software on a personal computer's (PC) operating system (OS) can then control the operation of the A/D and D/A converters on board the soundcard, through a particular bus interface of the PC - acting as an intermediary for high-level audio software running in the PC's OS. This project provides open-source software for a do-it-yourself (DIY) prototype board based on a Field-Programmable Gate Array (FPGA) that interfaces to a PC through the USB bus - and demonstrates full-duplex, mono 8-bit/44.1 kHz soundcard operation. Thus, the inclusion of FPGA technology in this paper -- along with previous work with discrete part- and microcontroller-based designs -- completes an overview of architectures currently available for DIY implementations of soundcards, serving as a broad introductory tutorial to practical digital audio.
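The H.264 study above rests on the identity that power equals energy per operation times operations per second, so at a fixed performance target the only lever is energy per operation. The gap factors it quotes compose as in this small, illustrative restatement (the factors are from the abstract; only the arithmetic is added here):

```python
# Illustrative composition of the gap factors quoted in the H.264 abstract
# above (the factors are theirs; the arithmetic here is only a restatement).
# Since power = (energy per op) * (ops per second), closing the gap to an ASIC
# at fixed performance means cutting energy per operation.

cmp_vs_asic_energy_gap = 500     # initial CMP energy vs. ASIC
simd_energy_improvement = 10     # broadly applicable optimizations (e.g., SIMD)
custom_energy_improvement = 160  # algorithm-specific functional/storage units

print("gap after SIMD-style optimizations:",
      cmp_vs_asic_energy_gap / simd_energy_improvement, "x")    # 50x
print("gap after algorithm-specific units:",
      cmp_vs_asic_energy_gap / custom_energy_improvement, "x")  # ~3.1x
```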
Title Virtualization of heterogeneous machines hardware description in a synthesizable object-oriented language Abstract Lime is a new Java-compatible and object-oriented language designed to make programming of reconfigurable hardware significantly more accessible to skilled software developers. Lime programs may run either in software (via Java bytecodes) or in hardware (via behavioral and logic synthesis). This paper illustrates the salient synthesis-oriented features of the language using a photo-mosaic algorithm with inherent bit, pipeline, and data parallelism. The result is a virtual machine abstraction that extends across a heterogeneous architecture comprising a CPU, FPGA, and other computational structures. Title Caisson: a hardware description language for secure information flow Abstract Information flow is an important security property that must be incorporated from the ground up, including at hardware design time, to provide a formal basis for a system's root of trust. We incorporate insights and techniques from designing information-flow secure programming languages to provide a new perspective on designing secure hardware. We describe a new hardware description language, Caisson, that combines domain-specific abstractions common to hardware design with insights from type-based techniques used in secure programming languages. The proper combination of these elements allows for an expressive, provably secure HDL that operates at a level of abstraction familiar to the target audience of the language, hardware architects. We have implemented a compiler for Caisson that translates designs into Verilog and then synthesizes the designs using existing tools. As an example of Caisson's usefulness, we have addressed an open problem in secure hardware by creating the first-ever provably information-flow secure processor with micro-architectural features including pipelining and a cache. We synthesize the secure processor and empirically compare it in terms of chip area, power consumption, and clock frequency with both a standard (insecure) commercial processor and also a processor augmented at the gate level to dynamically track information flow. Our processor is competitive with the insecure processor and significantly better than dynamic tracking. Title ASystemC: an AOP extension for hardware description language Abstract Hardware-design requirements are becoming increasingly complex. Accordingly, hardware developers are also beginning to use modern programming languages instead of traditional hardware description languages. However, the modularity of current hardware designs has not changed from that of traditional designs. In this paper, we first conducted an empirical investigation through interviews with real-world developers of circuit products, and confirmed that cross-cutting concerns exist in actual products. The cross-cutting concerns fall into two types: one in common with software development and one specific to hardware design. In light of these results, this paper proposes ASystemC, an AOP extension for the hardware description language SystemC. ASystemC provides AOP features based on the AspectJ-like pointcut-advice mechanism. The design principle of ASystemC is practicality; we designed ASystemC to accept existing SystemC source code, and to weave aspects by using source-to-source conversion that outputs human-readable SystemC code.
This design allows a user to make use, as much as possible, not only of existing code but also of existing knowledge and development processes. As a result, ASystemC does not require modification of the existing source code review process or source analysis/manipulation tools, even if a developer in a development team is unfamiliar with ASystemC. In addition, we confirmed the practicality and flexibility of ASystemC through case studies: estimation of circuit size using simulation, feature-configurable products, and LTL verification. These cases are abstracted from actual problems in our products. They require not only code-level changes but also structural changes. Title Virtualization in the age of heterogeneous machines Abstract Since their invention over 40 years ago, virtual machines have been used to virtualize one or more von Neumann processors and their associated peripherals. System virtual machines provide the illusion that the user has their own instance of a physical machine with a given instruction set architecture (ISA). Process virtual machines provide the illusion of running on a synthetic architecture independent of the underlying ISA, generally for the purpose of supporting a high-level language. To continue the historical trend of exponential increases in computational power in the face of limits on clock frequency scaling, we must find ways to harness the inherent parallelism of billions of transistors. I contend that multi-core chips are a fatally flawed approach - instead, maximum performance will be achieved by using heterogeneous chips and systems that combine customized and customizable computational substrates that achieve very high performance by closely matching the computational and communications structures of the application at hand. Such chips might look like a mashup of a conventional multicore, a GPU, an FPGA, some ASICs, and a DSP. But programming them with current technologies would be nightmarishly complex, portability would be lost, and innovation between chip generations would be severely limited. The answer (of course) is virtualization, at both the device level and the language level. In this talk I will illustrate some challenges and potential solutions in the context of IBM's Liquid Metal project, in which we are designing a new high-level language (Lime) and compiler/runtime technology to virtualize the underlying computational devices by providing a uniform semantic model. I will also discuss problems (and opportunities) that this raises at the operating system and data center levels, particularly with computational elements like FPGAs for which "context switching" is currently either extremely expensive or simply impossible. Title Architecture description and packing for logic blocks with hierarchy, modes and complex interconnect Abstract The development of future FPGA fabrics with more sophisticated and complex logic blocks requires a new CAD flow that permits the expression of that complexity and the ability to synthesize to it. In this paper, we present a new logic block description language that can depict complex intra-block interconnect, hierarchy and modes of operation. These features are necessary to support the complex soft logic blocks, memory and hard blocks of modern and future FPGAs. The key part of the CAD flow associated with this complexity is the packer, which takes the logical atomic pieces of the complex blocks and groups them into whole physical entities.
We present an area-driven generic packing tool that can pack the logical atoms into any heterogeneous FPGA described in the new language, including many different kinds of soft and hard logic blocks. We gauge its area quality by comparing the results achieved with a lower bound on the number of blocks required, and then illustrate its explorative capability in two ways: on fracturable LUT soft logic architectures, and on hard block memory architectures. The new infrastructure attaches to a flow that begins with a Verilog front-end, permitting the use of benchmarks that are significantly larger than the usual ones, and can target heterogeneous FPGAs. Title Recent advances in ASM++ methodology for FPGA design Abstract This paper reports the latest advances achieved in ASM++ methodology since its presentation in Title Contribution to graphical representation of SystemC structural model simulation Abstract Nowadays SystemC plays an important role in digital system design. This C++ class library provides the necessary constructs to model system architecture, including hardware timing, concurrency, and reactive behaviour missing in standard C++. The SystemC framework also offers a simulation kernel to simulate SystemC models, but without a graphical user interface (GUI). This framework can in some way be integrated into an existing tool, then using its GUI, as has been done by leading EDA (Electronic Design Automation) companies, but there is also the possibility to extend the framework itself. This approach was used and is presented in this paper. We propose extensions of the SystemC library to enable graphical representation of the user's structural model and graphical presentation of simulation results in conjunction with the model schematic visualization. The resulting schematic view is clear and easy to understand. Together with the built-in simulator, it is a powerful tool for structure verification and debugging. Title Parallel controller design and synthesis Abstract Petri nets provide an adequate means to visualize both sequential and parallel controller behavior. They can be used to model and visualize behavior comprising concurrency and synchronization. Strongly time-dependent complex controllers can be modeled using Petri nets by introducing several extensions to the basic formalism. The contribution of the work lies in a novel type of Petri net specification, suitable for control unit design. This Petri net is a kind of Synchronous Interpreted Petri net, extended by multi-layer hierarchy and time dependencies. Moreover, a method for transforming the Petri net into synthesizable VHDL code is proposed. The capabilities of the approach are shown by means of a small example illustrating the Petri net creation and its transformation into a VHDL behavioral description. The VHDL code synthesizability is demonstrated by synthesis for the Spartan-3E FPGA family and the CoolRunner XPLA3 CPLD family. Title What input-language is the best choice for high level synthesis (HLS)? Abstract As of 2010, over 30 of the world's top semiconductor/systems companies have adopted HLS. In 2009, SoC tape-outs containing IPs developed using HLS exceeded 50 for the first time. Now that the practicality and value of HLS is established, engineers are turning to the question of "what input-language works best?" The answer is critical because it drives key decisions regarding the tool/methodology infrastructure companies will create around this new flow. ANSI-C/C++ advocates cite ease of learning and simulation speed.
SystemC advocates make similar claims, and point to SystemC's hardware-oriented features. Proponents of BSV (Bluespec SystemVerilog) claim that language enhances architectural transparency and control. To maximize the benefits of HLS, companies must consider many factors and tradeoffs. CCS Hardware Electronic design automation Logic synthesis CCS Hardware Electronic design automation Modeling and parameter extraction Title Power Limitations and Dark Silicon Challenge the Future of Multicore Abstract Since 2004, processor designers have increased core counts to exploit Moore’s Law scaling, rather than focusing on single-core performance. The failure of Dennard scaling, to which the shift to multicore parts is partially a response, may soon limit multicore scaling just as single-core scaling has been curtailed. This paper models multicore scaling limits by combining device scaling, single-core scaling, and multicore scaling to measure the speedup potential for a set of parallel workloads for the next five technology generations. For device scaling, we use both the ITRS projections and a set of more conservative device scaling parameters. To model single-core scaling, we combine measurements from over 150 processors to derive Pareto-optimal frontiers for area/performance and power/performance. Finally, to model multicore scaling, we build a detailed performance model of upper-bound performance and lower-bound core power. The multicore designs we study include single-threaded CPU-like and massively threaded GPU-like multicore chip organizations with symmetric, asymmetric, dynamic, and composed topologies. The study shows that regardless of chip organization and topology, multicore scaling is power limited to a degree not widely appreciated by the computing community. Even at 22 nm (just one year from now), 21% of a fixed-size chip must be powered off, and at 8 nm, this number grows to more than 50%. Through 2024, only 7.9× average speedup is possible across commonly used parallel workloads for the topologies we study, leaving a nearly 24-fold gap from a target of doubled performance per generation. Title A hardware simulator for teaching CPU design Abstract This presentation describes a GUI-based tool for teaching CPU design to computer architecture students. A datapath builder allows microarchitecture building blocks, such as registers, ALUs, and multiplexors, to be laid out and wired together. A control builder allows a control state machine to be developed for the datapath. Because the processor design is simulated within a full PC emulator, student-designed processors can use emulated devices, such as drives, video, and I/O ports. A tutorial teaches students to use the simulator to build and simulate a pipelined RISC processor. Title Probabilistic modeling for job symbiosis scheduling on SMT processors Abstract Symbiotic job scheduling improves simultaneous multithreading (SMT) processor performance by coscheduling jobs that have “compatible” demands on the processor's shared resources. Existing approaches however require a sampling phase, evaluate a limited number of possible coschedules, use heuristics to gauge symbiosis, are rigid in their optimization target, and do not preserve system-level priorities/shares. This article proposes probabilistic job symbiosis modeling, which predicts whether jobs will create positive or negative symbiosis when coscheduled without requiring the coschedule to be evaluated. 
The model, which uses per-thread cycle stacks computed through a previously proposed cycle accounting architecture, is simple enough to be used in system software. Probabilistic job symbiosis modeling provides six key innovations over prior work in symbiotic job scheduling: (i) it does not require a sampling phase, (ii) it readjusts the job coschedule continuously, (iii) it evaluates a large number of possible coschedules at very low overhead, (iv) it is not driven by heuristics, (v) it can optimize a performance target of interest (e.g., system throughput or job turnaround time), and (vi) it preserves system-level priorities/shares. These innovations make symbiotic job scheduling both practical and effective. Our experimental evaluation, which assumes a realistic scenario in which jobs come and go, reports an average 16% (and up to 35%) reduction in job turnaround time compared to the previously proposed SOS (sample, optimize, symbios) approach for a two-thread SMT processor, and an average 19% (and up to 45%) reduction in job turnaround time for a four-thread SMT processor. Title Improving dynamic prediction accuracy through multi-level phase analysis Abstract Phase analysis, which classifies the set of execution intervals with similar execution behavior and resource requirements, has been widely used in a variety of dynamic systems, including dynamic cache reconfiguration, prefetching and race detection. While phase granularity has been a major factor in the accuracy of phase prediction, it has not yet been well investigated, and most dynamic systems adopt a fine-grained prediction scheme. However, such a scheme can only take into account recent local phase information and can be frequently disturbed by temporary noise from instantaneous phase changes, which might notably limit the prediction accuracy. In this paper, we make the first investigation of the potential of multi-level phase analysis (MLPA), where phase analyses of different granularities are combined to improve the overall accuracy. The key observation is that a coarse-grained interval usually consists of multiple fine-grained intervals. To demonstrate the effectiveness of MLPA, we apply it to a dynamic cache reconfiguration system which dynamically adjusts the cache size to reduce the power consumption and access time of the data cache. Experimental results show that MLPA can further reduce the average cache size by 15% compared to the fine-grained scheme. Title An efficient CPI stack counter architecture for superscalar processors Abstract Cycles-Per-Instruction (CPI) stacks provide intuitive and insightful performance information to software developers. Performance bottlenecks are easily identified from CPI stacks, which hint towards software changes for improving performance. Computing CPI stacks on contemporary superscalar processors is non-trivial, though, because of various overlap effects. Prior work proposed a CPI counter architecture for computing CPI stacks on out-of-order processors. The accuracy of the obtained CPI stacks was evaluated previously; however, the hardware overhead analysis was not based on a detailed hardware implementation. In this paper, we implement the previously proposed CPI counter architecture in hardware and find that the previous design can be further optimized. We propose a novel hardware- and power-efficient CPI counter architecture that reduces chip area by 44% and power consumption by 47% over the best possible prior design, while maintaining nearly the same level of performance and accuracy.
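CPI stacks, as used in the two abstracts above, attribute total cycles per instruction to a base component plus penalty components such as cache misses and branch mispredictions. A minimal, illustrative sketch follows (the cycle counts are hypothetical, and the sketch ignores the overlap effects the counter architecture above is designed to handle):

```python
# Minimal, illustrative sketch of a CPI stack (hypothetical cycle counts; this
# is not the counter architecture from the paper above and ignores the overlap
# effects it is designed to handle): total cycles per instruction are split
# into a base component plus penalty components, and the fractions sum to the
# measured CPI.

cycles_by_component = {
    "base":              4_000_000,
    "L2 miss":           2_500_000,
    "branch mispredict":   900_000,
    "TLB miss":            600_000,
}
instructions = 5_000_000

total_cycles = sum(cycles_by_component.values())
print(f"total CPI = {total_cycles / instructions:.2f}")
for component, cycles in cycles_by_component.items():
    print(f"  {component:<17} {cycles / instructions:.2f} CPI "
          f"({100 * cycles / total_cycles:.0f}%)")
```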
Title Workload generation for microprocessor performance evaluation: SPEC PhD award (invited abstract) Abstract This PhD thesis [1], awarded the SPEC Distinguished Dissertation Award 2011, proposes and studies workload generation and reduction techniques for microprocessor performance evaluation. (1) The thesis proposes code mutation, a novel methodology for hiding proprietary information from computer programs while maintaining representative behavior; code mutation enables dissemination of proprietary applications as benchmarks to third parties in both academia and industry. (2) It contributes to sampled simulation by proposing NSL-BLRL, a novel warm-up technique that reduces simulation time by an order of magnitude over the state of the art. (3) It presents a benchmark synthesis framework for generating synthetic benchmarks from a set of desired program statistics. The benchmarks are generated in a high-level programming language, which enables both compiler and hardware exploration. Title Studying hardware and software trade-offs for a real-life web 2.0 workload Abstract Designing data centers for Web 2.0 social networking applications is a major challenge because of the large number of users, the large scale of the data centers, the distributed application base, and the cost sensitivity of a data center facility. Optimizing the data center for performance per dollar is far from trivial. In this paper, we present a case study characterizing and evaluating hardware/software design choices for a real-life Web 2.0 workload. We sample the Web 2.0 workload both in space and in time to obtain a reduced workload that can be replayed, driven by input data captured from a real data center. The reduced workload captures the important services (and their interactions) and allows for evaluating how hardware choices affect end-user experience (as measured by response times). We consider the Netlog workload, a popular and commercially deployed social networking site with a large user base, and we explore hardware trade-offs in terms of core count, clock frequency, traditional hard disks versus solid-state disks, etc., for the different servers, and we obtain several interesting insights. Further, we present two use cases illustrating how our characterization method can be used for guiding hardware purchasing decisions as well as software optimizations. Title An Extended SystemC Framework for Efficient HW/SW Co-Simulation Abstract In this article, we propose an extended SystemC framework that directly enables software simulation in SystemC. Although SystemC has been widely adopted for system-level simulation of hardware designs nowadays, completing HW/SW co-simulation still requires an additional instruction set simulator (ISS) for software execution. However, the heavy intercommunication overheads between the two heterogeneous simulators would significantly slow down simulation performance. To deal with this issue, our proposed approach automatically generates high-speed and equivalent SystemC models for target software applications that can be directly integrated with hardware models for complete HW/SW co-simulation. In addition, to properly handle multitasking, an efficient OS model is devised to support accurate preemptive scheduling. Since both the generated application model and the OS model are constructed as SystemC modules, our approach avoids heavy intercommunication overheads and achieves simulation over 1,000 times faster than the conventional ISS-SystemC approach.
Experimental results demonstrate that our extended SystemC approach can perform at 50 to 220 MIPS while offering accurate simulation results. Title VSim: Simulating multi-server setups at near native hardware speed Abstract Simulating contemporary computer systems is a challenging endeavor, especially when it comes to simulating high-end setups involving multiple servers. The simulation environment needs to run complete software stacks, including operating systems, middleware, and application software, and it needs to simulate network and disk activity next to CPU performance. In addition, it needs the ability to scale out to a large number of server nodes while attaining good accuracy and reasonable simulation speeds. This paper presents VSim, a novel simulation methodology for multi-server systems. VSim leverages virtualization technology for simulating a target system on a host system. VSim controls CPU, network and disk performance on the host, and it gives the illusion to the software stack to run on a target system through time dilation. VSim can simulate multiple targets per host, and it employs a distributed simulation scheme across multiple hosts for simulations at scale. Our experimental results demonstrate VSim's accuracy: typical errors are below 6% for CPU, disk, and network performance. Real-life workloads involving the Lucene search engine and the Olio Web 2.0 benchmark illustrate VSim's utility and accuracy (average error of 3.2%). Our current setup can simulate up to five target servers per host, and we provide a Hadoop workload case study in which we simulate 25 servers. These simulation results are obtained at a simulation slowdown of one order of magnitude compared to native hardware execution. Title SESAM/Par4All: a tool for joint exploration of MPSoC architectures and dynamic dataflow code generation Abstract Due to the increasing complexity of new multiprocessor systems on chip, flexible and accurate simulators become a necessity for exploring the vast design space solution. In a streaming execution model, only a well-balanced pipeline can lead to an efficient implementation. However with dynamic applications, each stage is prone to execution time variations. Only a joint exploration of the application space of parallelization possibilities, together with the possible MPSoC architectural choices, can lead to an efficient embedded system. In this paper, we associate a semi-automatic parallelization workflow based on the Par4All retargetable compiler, to the SESAM environment. This new framework can ease the application exploration and find the best tradeoffs between complexity and performance for asymmetric homogeneous MPSoCs and dynamic streaming application processing. A use case is performed with a radio sensing application implemented on a complete MPSoC platform. 
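An editorial note on the SESAM/Par4All abstract above, which argues that "only a well-balanced pipeline can lead to an efficient implementation": the point can be made with a few lines of arithmetic, since the steady-state throughput of a software pipeline is bounded by its slowest stage. The C sketch below uses made-up stage times (not data from the paper) to show how one overloaded stage caps the achievable speedup.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical per-item execution times of four pipeline stages (ms). */
        double stage_ms[] = { 2.0, 2.5, 9.0, 2.5 };
        int n = sizeof stage_ms / sizeof stage_ms[0];
        double sum = 0.0, max = 0.0;

        for (int i = 0; i < n; i++) {
            sum += stage_ms[i];
            if (stage_ms[i] > max)
                max = stage_ms[i];
        }
        printf("sequential time per item: %.2f ms\n", sum);
        printf("pipelined time per item : %.2f ms (slowest stage)\n", max);
        printf("speedup on %d processors: %.2fx (ideal %d.00x)\n", n, sum / max, n);
        return 0;
    }

With these numbers the pipeline achieves only about a 1.8x speedup on four processors, which is exactly the kind of imbalance a joint exploration of parallelization and architecture is meant to remove.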
CCS Hardware Electronic design automation Physical design (EDA) CCS Hardware Electronic design automation Timing analysis CCS Hardware Electronic design automation Methodologies for EDA CCS Hardware Hardware validation Functional verification CCS Hardware Hardware validation Physical verification CCS Hardware Hardware validation Post-manufacture validation and debug CCS Hardware Hardware test Analog, mixed-signal and radio frequency test CCS Hardware Hardware test Board- and system-level test CCS Hardware Hardware test Defect-based test CCS Hardware Hardware test Design for testability CCS Hardware Hardware test Fault models and test metrics CCS Hardware Hardware test Memory test and repair Title ER: elastic RESET for low power and long endurance MLC based phase change memory Abstract Phase Change Memory (PCM) has recently emerged as a promising nonvolatile memory technology. To effectively increase memory capacity and reduce per bit fabrication cost, multi-level cell (MLC) PCM stores more than one bit per cell by differentiating multiple intermediate resistance levels. However, MLC PCM suffers from significantly shortened endurance due to its large RESET current that initiates the cell state. In this paper, we propose elastic RESET (ER) to construct non-2 Title Mitigating the effects of large multiple cell upsets (MCUs) in memories Abstract Reliability is a critical issue for memories. Radiation particles that hit the device can cause errors in some cells, which can lead to data corruption. To avoid this problem, memories are protected with per-word error correction codes (ECCs). Typically, single-error correction and double-error detection (SEC-DED) codes are used. As technology scales, errors caused by radiation particles on memories tend to affect more than one cell—what is known as a multiple cell upset (MCU). To ensure that only a single cell is affected in each word, interleaving is used. With interleaving, cells that belong to the same word are placed at a sufficient distance such that an MCU will only affect a single cell on each word. The use of interleaving significantly increases the cost of the device. Also, determining the interleaving distance (ID) required to avoid MCUs causing double errors is not trivial. Typically, accelerated radiation experiments with a limited number of particle hits are used. They provide a lower bound on the required ID, but larger MCUs may occur with a low probability. But even if the percentage of such large MCUs is very low, the impact on reliability can be significant. This article presents a technique to mitigate the effects of large MCUs that is, those that exceed the ID, on memory reliability. The proposed approach is able to correct most double errors caused by large MCUs by exploiting the locality of the errors within an MCU. Title FFT-cache: a flexible fault-tolerant cache architecture for ultra low voltage operation Abstract Caches are known to consume a large part of total microprocessor power. Traditionally, voltage scaling has been used to reduce both dynamic and leakage power in caches. However, aggressive voltage reduction causes process-variation-induced failures in cache SRAM arrays, which compromise cache reliability. In this paper, we propose Flexible Fault-Tolerant Cache (FFT-Cache) that uses a flexible defect map to configure its architecture to achieve significant reduction in energy consumption through aggressive voltage scaling, while maintaining high error reliability. 
FFT-Cache uses a portion of faulty cache blocks as redundancy -- using block-level or line-level replication within or between sets to tolerate other faulty cache lines and blocks. Our configuration algorithm categorizes the cache lines based on the degree of conflict of their blocks to reduce the granularity of redundancy replacement. FFT-Cache thereby sacrifices a minimal number of cache lines to avoid impacting performance while tolerating the maximum amount of defects. Our experimental results on SPEC2K benchmarks demonstrate that the operational voltage can be reduced to 375 mV, which achieves up to 80% reduction in dynamic power and up to 48% reduction in leakage power with small performance impact and area overhead. Title Energy-efficient cache design using variable-strength error-correcting codes Abstract Voltage scaling is one of the most effective mechanisms to improve microprocessors' energy efficiency. However, processors cannot operate reliably below a minimum voltage, Vccmin, since hardware structures may fail. Cell failures in large memory arrays (e.g., caches) typically determine Vccmin for the whole processor. We observe that most cache lines exhibit zero or one failures at low voltages. However, a few lines, especially in large caches, exhibit multi-bit failures and increase Vccmin. Previous solutions either significantly reduce cache capacity to enable uniform error correction across all lines, or significantly increase latency and bandwidth overheads when amortizing the cost of error-correcting codes (ECC) over large lines. In this paper, we propose a novel cache architecture that uses variable-strength error-correcting codes (VS-ECC). In the common case, lines with zero or one failures use a simple and fast ECC. A small number of lines with multi-bit failures use a strong multi-bit ECC that requires some additional area and latency. We present a novel dynamic cache characterization mechanism to determine which lines will exhibit multi-bit failures. In particular, we use multi-bit correction to protect a fraction of the cache after switching to low voltage, while dynamically testing the remaining lines for multi-bit failures. Compared to prior multi-bit-correcting proposals, VS-ECC significantly reduces power and energy, avoids significant reductions in cache capacity, incurs little area overhead, and avoids large increases in latency and bandwidth. Title Impact of positive bias temperature instability (PBTI) on 3T1D-DRAM cells Abstract Memory circuits are playing a key role in complex multicore systems with both data and instruction storage and mailbox communication functions. There is a general concern that the conventional SRAM cell based on the 6T structure could exhibit serious limitations in future CMOS technologies due to the instability caused by transistor mismatch as well as to leakage consumption. For L1 data caches, the new 3T1D DRAM cell is considered a potential candidate to replace 6T SRAMs. We first evaluate the impact of the positive bias temperature instability, PBTI, on the access and retention time of the 3T1D memory cell implemented with 45 nm technology. Then, we consider all sources of variations and the effect of the degradation caused by the aging of the device on the yield at system level. Title Influence of metallic tubes on the reliability of CNTFET SRAMs: error mechanisms and countermeasures Abstract Carbon nanotubes (CNTs) are considered a possible successor to CMOS technology. 
The adoption of these nanodevices for designing large VLSI systems, however, is limited by the unreliable manufacturing process. In this paper, we investigate the possibility of using CNTFETs to build SRAM arrays. We analyze the error mechanisms and show how stuck-at faults and pattern sensitive faults are caused by metallic tubes in different transistors of a 6-T SRAM cell. The results indicate the need of stronger error detecting codes than the widely used single-error-correcting, double-error-detecting codes in CMOS SRAMs. Title Stochastic non sequitur behavior analysis of fault tolerant hybrid systems Abstract In this paper, we introduce a new stochastic analysis method for the uncontrollable behaviour of hybrid systems triggered by faults. Models like manoeuvre automata can be used for controllable transitions between stable modes. However, the normal behaviour can become easily unpredictable when a fault occurs, and a human or an automated supervisor needs to take very quickly the right actions to drive the system back into a controllable state. The system behaviour while transiting two controllable states is called non sequitur. Using stochastic analysis we investigate how to extract information about non sequitur that can be used in stochastic control. Title Flikker: saving DRAM refresh-power through critical data partitioning Abstract Energy has become a first-class design constraint in computer systems. Memory is a significant contributor to total system power. This paper introduces Flikker, an application-level technique to reduce refresh power in DRAM memories. Flikker enables developers to specify critical and non-critical data in programs and the runtime system allocates this data in separate parts of memory. The portion of memory containing critical data is refreshed at the regular refresh-rate, while the portion containing non-critical data is refreshed at substantially lower rates. This partitioning saves energy at the cost of a modest increase in data corruption in the non-critical data. Flikker thus exposes and leverages an interesting trade-off between energy consumption and hardware correctness. We show that many applications are naturally tolerant to errors in the non-critical data, and in the vast majority of cases, the errors have little or no impact on the application's final outcome. We also find that Flikker can save between 20-25% of the power consumed by the memory sub-system in a mobile device, with negligible impact on application performance. Flikker is implemented almost entirely in software, and requires only modest changes to the hardware. Title Partitioning techniques for partially protected caches in resource-constrained embedded systems Abstract Increasing exponentially with technology scaling, the soft error rate even in earth-bound embedded systems manufactured in deep subnanometer technology is projected to become a serious design consideration. Partially protected cache (PPC) is a promising microarchitectural feature to mitigate failures due to soft errors in power, performance, and cost sensitive embedded processors. A processor with PPC maintains two caches, one protected and the other unprotected, both at the same level of memory hierarchy. The intuition behind PPCs is that not all data in the application is equally prone to soft errors. 
By finding and mapping the data that is more prone to soft errors to the protected cache, and error-resilient data to the unprotected cache, failures induced by soft errors can be significantly reduced at a minimal power and performance penalty. Consequently, the effectiveness of PPCs critically hinges on the compiler's ability to partition application data into error-prone and error-resilient data. The effectiveness of PPCs has previously been demonstrated on multimedia applications—where an obvious partitioning of data exists, the multimedia data is inherently resilient to soft errors, and the rest of the data and the entire code is assumed to be error-prone. Since the amount of multimedia data is a quite significant component of the entire application data, this obvious partitioning is quite effective. However, no such obvious data and code partitioning exists for general applications. This severely restricts the applicability of PPCs to data caches and instruction caches in general. This article investigates vulnerability-based partitioning schemes that are applicable to applications in general and effectively reduce failures due to soft errors at minimal power and performance overheads. Our experimental results on an HP iPAQ-like processor enhanced with PPC architecture, running benchmarks from the MiBench suite demonstrate that our partitioning heuristic efficiently finds page partitions for data PPCs that can reduce the failure rate by 48% at only 2% performance and 7% energy overhead, and finds page partitions for instruction PPCs that reduce the failure rate by 50% at only 2% performance and 8% energy overhead, on average. Title Analysis of thermal behaviors of spin-torque-transfer RAM: a simulation study Abstract We present an accurate model of the self-heating effect in the Spin-Torque-Transfer RAM (STTRAM) using finite-volume-methods and thermal RC based compact models. We couple device level thermal simulation to the self-heating phenomenon to show that self-heating during write operation can result in significant temperature increase in STTRAM which in turn adversely affect the read disturb, leakage energy and sensing accuracy. CCS Hardware Hardware test Hardware reliability screening CCS Hardware Hardware test Test-pattern generation and fault simulation Title High-performance low-energy STT MRAM based on balanced write scheme Abstract It is well known that high write time/energy in STT MRAM are aggravated by the asymmetry in write currents for '0'→'1' and '1'→'0' transitions. This asymmetry is primarily due to the source degeneration of the access transistor during write. In this work, we propose a design methodology which avoids the source degeneration of the access transistor, leading to balanced switching times for '0'→'1' and '1'→'0' transitions. This is achieved by using an additional (negative) bit-line voltage and reduced word-line voltage. The proposed method reduces write time (by ~40%) and write energy (by 65%), enhances reliability of MTJ, and significantly improves tolerance to process variation. In the proposed scheme, source-line can be directly connected to ground signal leading to a compact cell layout. Title Automatic RTL Test Generation from SystemC TLM Specifications Abstract SystemC transaction-level modeling (TLM) is widely used to enable early exploration for both hardware and software designs. It can reduce the overall design and validation effort of complex system-on-chip (SOC) architectures. 
However, due to the lack of automated techniques, coupled with limited reuse of validation efforts between abstraction levels, SOC validation is becoming a major bottleneck. This article presents a novel top-down methodology for automatically generating register transfer-level (RTL) tests from SystemC TLM specifications. It makes two important contributions: (i) it proposes a method that can automatically generate TLM tests using various coverage metrics, and (ii) it develops a test refinement specification for automatically converting TLM tests to RTL tests in order to reduce overall validation effort. We have developed a tool which incorporates these activities to enable automated RTL test generation from SystemC TLM specifications. Case studies using a router example and a 64-bit Alpha AXP pipelined processor demonstrate that our approach can achieve intended functional coverage of the RTL designs, as well as capture various functional errors and inconsistencies between specifications and implementations. Title A memory accelerator with gather functions for bandwidth-bound irregular applications Abstract Compute-intensive processing can be easily accelerated using processors with many cores such as GPUs. However, the memory bandwidth limitation becomes more serious every year for memory-bandwidth-intensive applications such as sparse matrix-vector multiplication (SpMV). In order to accelerate memory-bandwidth-intensive applications, we have proposed a memory system with additional functions of scattering and gathering. For the preliminary evaluation of our proposed system, we assumed that the throughput of the memory system was sufficient. In this paper, we propose a memory system with scattering and gathering using many narrow memory channels. We evaluate the feasible throughput of the proposed memory system based on DDR3 DRAM with the modified DRAMsim2 simulator. In addition, we evaluate the performance of SpMV using our method for the proposed memory system and a GPU. We confirm that the proposed memory system provides good performance and good stability across matrix shape variations while using fewer pins for external memory. Title The dark side of DEMMON: what is behind the scene in engineering large-scale wireless sensor networks Abstract Most research work on WSNs has focused on protocols or on specific applications. There is a clear lack of easy/ready-to-use WSN technologies and tools for planning, implementing, testing and commissioning WSN systems in an integrated fashion. While there exists a plethora of papers about network planning and deployment methodologies, to the best of our knowledge none of them helps the designer to match coverage requirements with network performance evaluation. In this paper we aim at filling this gap by presenting a unified toolset, i.e., a framework able to provide a global picture of the system, from the network deployment planning to system test and validation. This toolset has been designed to back up the EMMON WSN system architecture for large-scale, dense, real-time embedded monitoring. It includes network deployment planning, worst-case analysis and dimensioning, protocol simulation and automatic remote programming and hardware testing tools. This toolset has been paramount to validate the system architecture through DEMMON1, the first EMMON demonstrator, i.e., a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date. 
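As an illustration of the irregular access pattern targeted by the gather-capable memory system in the accelerator abstract above: in CSR-format sparse matrix-vector multiplication, the reads of x[col[j]] are indirect and bandwidth-bound, which is precisely what a memory-side gather function is meant to speed up. The C sketch below uses a toy 3x3 matrix (invented for illustration; it says nothing about the proposed hardware itself).

    #include <stdio.h>

    int main(void)
    {
        /* CSR representation of [[4 0 1], [0 2 0], [3 0 5]]. */
        int    row_ptr[] = { 0, 2, 3, 5 };
        int    col[]     = { 0, 2, 1, 0, 2 };
        double val[]     = { 4, 1, 2, 3, 5 };
        double x[] = { 1.0, 2.0, 3.0 }, y[3];

        for (int i = 0; i < 3; i++) {
            double acc = 0.0;
            for (int j = row_ptr[i]; j < row_ptr[i + 1]; j++)
                acc += val[j] * x[col[j]];   /* the gather: irregular reads of x */
            y[i] = acc;
        }
        printf("y = [%g %g %g]\n", y[0], y[1], y[2]);  /* expected: [7 4 18] */
        return 0;
    }

The val and col arrays stream sequentially, but x is accessed through col, so performance is dominated by how quickly those scattered elements can be gathered from memory.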
Title ExLRU: a unified write buffer cache management for flash memory Abstract NAND flash memory has been widely adopted in embedded systems as secondary storage. Yet the further development of flash memory strongly hinges on tackling its inherent unfavorable characteristics, including read/write speed asymmetry, the inability to update data in place, and performance-harmful erase operations. While Write Buffer Cache (WBC) has been proposed to enhance the performance of write operations, the development of a unified WBC management scheme that is effective for diverse types of access patterns is still a challenging task. In this paper, a novel WBC management scheme named Expectation-based LRU (ExLRU) is proposed to improve the performance of write operations while at the same time reducing the number of erase operations on flash memory. ExLRU accurately maintains access history information in WBC, based on which a new cost model is constructed to select the data with minimum write cost to be written to flash memory. An efficient ExLRU implementation with negligible hardware overhead is further developed. Simulation results show that ExLRU outperforms state-of-the-art WBC management schemes under various workloads. Title Performance impact and interplay of SSD parallelism through advanced commands, allocation strategy and data granularity Abstract With the development of the NAND-Flash technology, NAND-Flash based Solid-State Disk (SSD) has been attracting a great deal of attention from both industry and academia. While a range of SSD research topics, from interface techniques to buffer management and Flash Translation Layer (FTL), from performance to endurance and energy efficiency, have been extensively studied in the literature, the SSD being studied was by and large treated as a grey or black box in that many of the internal features such as advanced commands, physical-page allocation schemes and data granularity are hidden or assumed away. We argue that, based on our experimental study, it is these internal features and their interplay that will help provide the missing but significant insights to designing high-performance and high-endurance SSDs. In this paper, we use our highly accurate and multi-tiered SSD simulator, called SSDsim, to analyze several key internal SSD factors to characterize their performance impacts, interplay and parallelisms for the purpose of performance and endurance enhancement of SSDs. From the results of our experiments, we found that: (1) larger pages tend to have a significantly negative impact on SSD performance under many workloads; (2) different physical-page allocation schemes have different deployment environments, where an optimal allocation scheme can be found for each workload; (3) although advanced commands provided by flash manufacturers can improve performance in some cases, they may jeopardize the SSD performance and endurance when used inappropriately; (4) since the parallelisms of SSD can be classified into four levels, namely, channel-level, chip-level, die-level and plane-level, the priority order of SSD parallelism, resulting from the strong interplay among physical-page allocation schemes and advanced commands, can have a very significant impact on SSD performance and endurance. Title A countermeasure against power analysis attacks for FSR-based stream ciphers Abstract In this paper we analyze the power characteristics of Feedback Shift Registers (FSRs) and their effect on FSR-based stream ciphers. 
We introduce a technique to isolate the switching activity of a stream cipher by equalizing the current drawn from the cipher, with lower power overhead compared to previously introduced countermeasures. By re-implementing the Grain-80 and the Grain-128 ciphers with the presented approach, we lower their power consumption by 20% and 25%, respectively, compared to previously proposed countermeasures. Title Advanced faults patterns for WSN dependability benchmarking Abstract Wireless sensor networks are typically deployed in uncontrolled environments that have high impact on the individual nodes' reliability and ability to sustain effective service for a particular application, not to mention the failure probability that increases with the size of the network itself. The high cost induced by the deployment of WSNs at large scale largely prevents the use of hardware- and software-based fault injection, leaving simulation-based tools as the only remaining option. To date, most simulation tools for WSNs do not provide extensive modules for dependability benchmarking, which leaves protocol designers to either use external fault-injection tools or modify the application code to simulate faults. These two factors make using realistic fault patterns difficult and might impact the real-time behavior of the applications. In this paper, we propose a new model for describing advanced fault patterns that subsumes previously used models for characterizing faulty behaviors. We implement the model in the WSNet simulator as an intermediate layer that is distinct from any layer in the protocol stack. Our modified WSNet is then used for extensive dependability benchmarking using typical WSN applications, matching both real-life fault patterns and specific attacks that evolve over time. Title Customizing pattern set for test power reduction via improved X-identification and reordering Abstract In this paper we present a method to identify don't care locations in a fully specified set of vectors, considering both the fault propagation path and the fault activation path. We exploit the identified X bits to convert the original vectors to low-power vectors through a dictionary-based approach that minimizes both dynamic and runtime leakage power. Both dynamic power and runtime leakage power depend on the activity in the circuit and hence on the sequence in which the test vectors are fed to it. We present an approach based on Particle Swarm Optimization (PSO) for vector reordering. Experiments on ISCAS89 benchmark circuits validate the effectiveness of our work. We achieve a maximum of 86.63% and an average of 60.89% reduction in dynamic power, a maximum of 6.87% and an average of 5.28% savings in leakage power, and a maximum of 66.55% and an average of 50.11% savings in total power with respect to the original compacted test set generated by the TetraMAX ATPG tool. Title Pattern grading for testing critical paths considering power supply noise and crosstalk using a layout-aware quality metric Abstract Power supply noise and crosstalk are considered the two major noise sources that negatively impact signal integrity in digital integrated circuits. In this paper, we propose a novel quality metric to evaluate path-delay fault test patterns in terms of their ability to cause excess delay on targeted critical paths. The proposed procedure quickly selects the best set of patterns for testing the critical paths under power supply noise and crosstalk effects. 
It also offers the design engineers a quick approach to evaluate the critical paths in static timing analysis (STA) and silicon to improve timing margin strategies. Simulation results demonstrate that the patterns selected by the proposed methodology generate the worst-case supply noise and crosstalk effects on target paths. CCS Hardware Hardware test Testing with distributed and parallel systems CCS Hardware Robustness Fault tolerance CCS Hardware Robustness Design for manufacturability CCS Hardware Robustness Hardware reliability CCS Hardware Robustness Safety critical systems CCS Hardware Emerging technologies Analysis and design of emerging devices and systems CCS Hardware Emerging technologies Biology-related information processing CCS Hardware Emerging technologies Circuit substrates CCS Hardware Emerging technologies Electromechanical systems CCS Hardware Emerging technologies Emerging interfaces CCS Hardware Emerging technologies Memory and dense storage CCS Hardware Emerging technologies Emerging optical and photonic technologies CCS Hardware Emerging technologies Reversible logic CCS Hardware Emerging technologies Plasmonics CCS Hardware Emerging technologies Quantum technologies CCS Hardware Emerging technologies Spintronics and magnetic technologies CCS Computer systems organization Architectures Serial architectures CCS Computer systems organization Architectures Parallel architectures CCS Computer systems organization Architectures Distributed architectures CCS Computer systems organization Architectures Other architectures CCS Computer systems organization Embedded and cyber-physical systems Sensor networks CCS Computer systems organization Embedded and cyber-physical systems Robotics CCS Computer systems organization Embedded and cyber-physical systems Sensors and actuators Title Robotic swarm cooperation by co-adaptation Abstract This paper presents a framework for co-adapting mobile sensors in hostile environments to allow telepresence of a distant user. The presented technique relies on cooperative co-evolution for sensor placement. It is shown that cooperative co-evolution is able to simultaneously find the required number of sensors to observe a given environment and a configuration that is consistently better than those found by other well-known optimization algorithms. Moreover, it is shown that co-evolution is also able to quickly reach a new configuration when the environment changes. Title Co-adapting mobile sensor networks to maximize coverage in dynamic environments Abstract With recent advances in mobile computing, swarm robotics has demonstrated its utility in countless situations like recognition, surveillance, and search and rescue. This paper presents a novel approach to optimize the position of a swarm of robots to accomplish sensing tasks based on cooperative co-evolution. Results show that the introduced cooperative method simultaneously finds the right number of sensors while also optimizing their positions in static and dynamic environments. Title Advances in tactile sensing and touch based human-robot interaction Abstract The problem of "providing robots with the sense of touch" is fundamental in order to develop the next generations of robots capable of interacting with humans in different contexts: in daily housekeeping activities, as working partners or as caregivers, just to name a few. 
From a low-level perspective, through tactile sensing it is possible to measure or estimate physical properties of manipulated or touched objects, whereas feedback from tactile sensors may enable the detection and safe control of the interaction between the robot and objects or humans. From a high-level perspective, touch-based cognitive processes can be enabled by developing robot body self-awareness capabilities and by differentiating the "self" from the "external space", thereby opening relevant new problems in robotics. The objective of this Workshop is to present and discuss the most recent achievements in the area of tactile sensing, starting from the technological aspects up to the application problems where tactile feedback plays a fundamental role. The Workshop will cover, but will not be limited to, the following three areas: Title Ekho: bridging the gap between simulation and reality in tiny energy-harvesting sensors Abstract Harvested energy makes long-term maintenance-free sensor deployments possible; however, as devices shrink in order to accommodate new applications, tightening energy budgets and increasing power supply volatility leave system designers poorly equipped to predict how their devices will behave when deployed. This paper describes the design and initial FPGA-based implementation of Ekho, a tool that records and emulates energy harvesting conditions, in order to support realistic and repeatable testing and experimentation. Ekho uses the abstraction of I-V curves---curves that describe harvesting current with respect to supply voltage---to accurately represent harvesting conditions, and supports a range of harvesting technologies. An early prototype emulates I-V curves with 0.1mA accuracy, and responds in 4.4μ Title Knowledge discovery from sensor data (SensorKDD) Abstract Sensor data is being collected at an unprecedented rate across a variety of domains from a broad spectrum of sources, such as wide-area sensor infrastructures, remote sensing instruments, RFIDs, and wireless sensor networks. With the recent proliferation of smartphones and similar GPS-enabled mobile devices, the collection of sensor data is no longer limited to scientific communities but has reached the general public. With massive volumes of such disparate, dynamic, and geographically distributed data available, many high-priority applications have been identified that involve analysis of such data to solve real world problems such as understanding climate change and its impacts, electric grid monitoring, disaster preparedness and management, national or homeland security, and the management of critical infrastructures. Given the unique characteristics of sensor data, particularly its spatiotemporal nature and the presence of constraints associated with the data collection and computational resources, there have been many research efforts to analyze the sensor data which build upon the general research in the data mining community but are significantly different in terms of how they address the specific challenges encountered when dealing with sensor data. In particular, the raw data from sensors needs to be efficiently managed and transformed to usable information through data fusion, which in turn must be converted to predictive insights via knowledge discovery, ultimately facilitating automated or human-induced tactical decisions or strategic policy based on decision sciences and decision support systems. 
Keeping in view the requirements of the emerging field of knowledge discovery from sensor data, we took the initiative to develop a community of researchers with common interests and scientific goals, which culminated in the organization of the SensorKDD series of workshops in conjunction with the prestigious ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. In this report, we summarize events at the Fourth ACM-SIGKDD International Workshop on Knowledge Discovery from Sensor Data (SensorKDD 2010). Title Integration of a low-cost RGB-D sensor in a social robot for gesture recognition Abstract An objective of natural Human-Robot Interaction (HRI) is to enable humans to communicate with robots in the same manner humans do among themselves. This includes the use of natural gestures to support and expand the information that is exchanged in the spoken language. To achieve that, robots need robust gesture recognition systems to detect the non-verbal information that is sent to them through human gestures. Traditional gesture recognition systems depend highly on lighting conditions and often require a training process before they can be used. We have integrated a low-cost commercial RGB-D (Red Green Blue - Depth) sensor in a social robot to allow it to recognise dynamic gestures by tracking a skeleton model of the subject and coding the temporal signature of the gestures in an FSM (Finite State Machine). The vision system is independent of low-light conditions and does not require a training process. Title Neural network based sensor drift compensation of induction motor Abstract In this paper, sensor drift compensation for the vector control of an induction motor using a neural network is presented. An induction motor is controlled based on vector control. The sensors sense the primary feedback signals for the feedback control system, which are processed by the controller. Any fault in the sensors causes incorrect measurements of the feedback signals due to malfunctions in sensor circuit elements, which affects system performance. Hence, sensor fault compensation or drift compensation is important for an electric drive. Analysis of sensor drift compensation in motor drives is done using neural networks. The feedback signals from the phase current sensors are given as the neural network input. The neural network then performs the auto-associative mapping of these signals so that its output is an estimate of the sensed signals. Since the auto-associative neural network exploits physical and analytical redundancy, whenever a sensor starts to drift, the drift is compensated at the output, and the performance of the drive system is barely affected. Title A fluid-suspension, electromagnetically driven eye with video capability for animatronic applications Abstract (Our work of the same title was initially published at "Humanoid '09" in Paris, France, and should be referred to for details). We have prototyped a compact, fluid-suspension, electromagnetically-rotated animatronic eye. The eye has no external moving parts, features low operating power, a range of motion and saccade speeds that can exceed that of the human eye, and an absence of frictional wear points. It supports a rear, stationary, video camera. In a special application, the eye can be separated into a hermetically sealable portion that might be used as a human eye prosthesis along with an extra-cranially-mounted magnetic drive. 
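A small editorial sketch of the idea, from the RGB-D gesture-recognition abstract above, of coding the temporal signature of a gesture in an FSM: the machine advances one state per satisfied motion condition and reports a gesture only when the full sequence has been observed. The feature (hand x-displacement per frame) and the threshold below are hypothetical stand-ins for real skeleton-tracking output.

    #include <stdio.h>

    /* Recognize a "wave": hand moves right, then left, then right again. */
    enum state { IDLE, RIGHT1, LEFT1, RECOGNIZED };

    static enum state step(enum state s, double dx)
    {
        const double TH = 0.05; /* metres of hand movement per frame (made up) */
        switch (s) {
        case IDLE:   return dx >  TH ? RIGHT1     : IDLE;
        case RIGHT1: return dx < -TH ? LEFT1      : RIGHT1;
        case LEFT1:  return dx >  TH ? RECOGNIZED : LEFT1;
        default:     return RECOGNIZED;
        }
    }

    int main(void)
    {
        /* Hypothetical per-frame hand displacements from a skeleton tracker. */
        double dxs[] = { 0.01, 0.08, 0.02, -0.09, -0.01, 0.07 };
        enum state s = IDLE;
        for (int i = 0; i < 6; i++)
            s = step(s, dxs[i]);
        printf(s == RECOGNIZED ? "wave recognized\n" : "no gesture\n");
        return 0;
    }

Because the FSM only encodes the ordering of motion primitives, it needs no training phase, which matches the abstract's claim that the system works without one.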
Title Software verification for TinyOS Abstract We describe the first software tool for the Title Context-aware robot navigation based on sensor association rules Abstract Within the mobile robotics research community, a great many approaches have been proposed for solving the navigation problem. The key difference between these various navigation architectures is the manner in which they decompose the problem into smaller subunits. In this paper, a data mining methodology developed for retrieving significant frequent patterns is extended to allow robots to learn and navigate on unknown terrain in a natural way. The method has two phases: a context identification phase and a validation phase. The conjunction of these phases provides an easy and straightforward way for robots to explore new working spaces. CCS Computer systems organization Embedded and cyber-physical systems System on a chip CCS Computer systems organization Embedded and cyber-physical systems Embedded systems CCS Computer systems organization Real-time systems Real-time operating systems CCS Computer systems organization Real-time systems Real-time languages CCS Computer systems organization Real-time systems Real-time system specification CCS Computer systems organization Real-time systems Real-time system architecture CCS Computer systems organization Dependable and fault-tolerant systems and networks Reliability Title An approach to improving the structure of error-handling code in the Linux kernel Abstract The C language does not provide any abstractions for exception handling or other forms of error handling, leaving programmers to devise their own conventions for detecting and handling errors. The Linux coding style guidelines suggest placing error handling code at the end of each function, where it can be reached by gotos whenever an error is detected. This coding style has the advantage of putting all of the error-handling code in one place, which eases understanding and maintenance, and reduces code duplication. Nevertheless, this coding style is not always applied. In this paper, we propose an automatic program transformation that transforms error-handling code into this style. We have applied our transformation to the Linux 2.6.34 kernel source code, on which it reorganizes the error handling code of over 1800 functions, in about 25 minutes. Title ACM SRC poster: gem: a formal dynamic environment for HPC pedagogy Abstract Computing is undergoing a dramatic shift from sequential to parallel processing. With this shift come new challenges: how to debug code with multiple processes and threads, and how to effectively teach these programming concepts. Traditional testing tools are ineffective and inefficient when it comes to detecting deep-seated logical bugs in parallel code, and often lack a GUI in popular IDEs. No support exists for teaching actual courses based on these tools either. In previous work, my research group provided an MPI testing tool called ISP and integrated it into Eclipse's PTP via the GEM plug-in. I expand on this work with enhanced graphical interactions, interception of threaded behavior, and a tool for HPC pedagogy. Title Dynamically scaling applications in the cloud Abstract Scalability is said to be one of the major advantages brought by the cloud paradigm and, more specifically, the one that makes it different to an "advanced outsourcing" solution. However, there are some important pending issues before making the dreamed automated scaling for applications come true. 
In this paper, the most notable initiatives towards whole application scalability in cloud environments are presented. We present relevant efforts at the edge of state of the art technology, providing an encompassing overview of the trends they each follow. We also highlight pending challenges that will likely be addressed in new research efforts and present an ideal scalable cloud system. Title Experience report: a do-it-yourself high-assurance compiler Abstract Embedded domain-specific languages (EDSLs) are an approach for quickly building new languages while maintaining the advantages of a rich metalanguage. We argue in this experience report that the "EDSL approach" can surprisingly ease the task of building a high-assurance compiler. We do not strive to build a fully formally-verified tool-chain, but take a "do-it-yourself" approach to increase our confidence in compiler-correctness without too much effort. Title Casting doubts on the viability of WiFi offloading Abstract With the advent of the smartphone, mobile data usage has exploded which in turn has created tremendous pressure on cellular data networks. A promising candidate to reduce the impact of cellular data growth is WiFi offloading. However, recent data from our study of two hundred student smartphone users casts doubts on the reductions that can be gained from WiFi offloading. Despite the users operating in a dense university WiFi environment, cellular consumption still dominated overall data usage. We believe the root cause of lesser WiFi utilization can be traced to the WiFi being optimized for laptop WiFi reception rather than the more constrained smartphone WiFi reception. Our work examines the relationship of WiFi versus 3G usage through a variety of aspects including active phone usage, application types, and traffic volume over an eight week period from the Spring of 2012. Title Protecting web applications from SQL injection attacks by using framework and database firewall Abstract SQL Injection attacks are the costly and critical attacks on web applications: it is a code injection technique that allows attackers to obtain unrestricted access to the databases and potentially sensitive information like usernames, passwords, email ids, credit card details present in them. Various techniques have been proposed to address the problem of SQL Injection attack such as defense coding practices, detection and prevention techniques, and intrusion detection systems. However most of these techniques have one or more disadvantages such as requirement for code modification, applicable to limited type of attacks and web applications. In this paper, we discuss a secure mechanism for protecting web applications from SQL Injection attacks by using framework and database firewall. This mechanism uses combined static and dynamic analysis technique. In static analysis, we list URLs, forms, injection points, and vulnerable parameters of web application. Thus, we identify valid queries that could be generated by the application. In dynamic analysis, we use database firewall to monitor runtime generated queries and check them against the whitelist of queries. The experimental setup makes use of real web applications and two open source tools namely Web Application Attack and Audit Framework (w3af) and GreenSQL. We used w3af for listing all the valid queries and GreenSQL as database firewall. 
The results show that implemented mechanism is capable of detecting all types of SQL Injection attacks without requiring any code modification to the existing web application but with an additional element of deploying a proxy. Title Procedure hopping: a low overhead solution to mitigate variability in shared-L1 processor clusters Abstract Variation in performance and power across manufactured parts and their operating conditions is a well-known issue in advanced CMOS processes. This paper proposes a resilient HW/SW architecture for shared-L1 processor clusters to combat both static and dynamic variations. We first introduce the notion of procedure-level vulnerability ( Title Fan-speed-aware scheduling of data intensive jobs Abstract As server processor power densities increase, the cost of air cooling also grows resulting from higher fan speeds. Our measurements show that vibrations induced by fans in high-end servers and its rack neighbors cause a dramatic drop in hard disk bandwidth, resulting in a corresponding decrease in application performance. In this paper we quantify the performance and energy cost effects of the fan vibrations and propose a disk performance aware thermal, energy and cooling technique. Results show that we can not only meet thermal constraints, but also improve performance by 1.35x as compared to the conventional methods. Title A game theoretic resource allocation for overall energy minimization in mobile cloud computing system Abstract Cloud computing and virtualization techniques provide mobile devices with battery energy saving opportunities by allowing them to offload computation and execute code remotely. When the cloud infrastructure consists of heterogeneous servers, the mapping between mobile devices and servers plays an important role in determining the energy dissipation on both sides. From an environmental impact perspective, any energy dissipation related to computation should be counted. To achieve energy sustainability, it is important reducing the overall energy consumption of the mobile systems and the cloud infrastructure. Furthermore, reducing cloud energy consumption can potentially reduce the cost of mobile cloud users because the pricing model of cloud services is pay-by-usage. In this paper, we propose a game-theoretic approach to optimize the overall energy in a mobile cloud computing system. We formulate the energy minimization problem as a congestion game, where each mobile device is a player and his strategy is to select one of the servers to offload the computation while minimizing the overall energy consumption. We prove that the Nash equilibrium always exists in this game and propose an efficient algorithm that could achieve the Nash equilibrium in polynomial time. Experimental results show that our approach is able to reduce the total energy of mobile devices and servers compared to a random approach and an approach which only tries to reduce mobile devices alone. Title Reliability analysis in component-based development via probabilistic model checking Abstract Engineering of highly reliable systems requires support of sophisticated design methods allowing software architects to competently decide between various design alternatives already early in the development process. Architecture-based reliability prediction provides such capability. 
The formalisms and analytical methods employed by existing approaches are however often limited to a single reliability measure (the probability of failure on demand) and consideration of behavioural uncertainty (focusing on the uncertainty in model parameters, not the behaviour itself). This paper presents a formal reliability assessment approach for component-based systems based on the probabilistic model checking of various reliability-related properties specified in probabilistic linear temporal logic (PLTL). The systems are formalized as Markov decision processes (MDP), which allows software architects to encode behavioural uncertainties into the models in terms of nondeterministic (scheduler-decided) choices in the MDP. CCS Computer systems organization Dependable and fault-tolerant systems and networks Availability Title Dynamically scaling applications in the cloud Abstract Scalability is said to be one of the major advantages brought by the cloud paradigm and, more specifically, the one that makes it different to an "advanced outsourcing" solution. However, there are some important pending issues before making the dreamed automated scaling for applications come true. In this paper, the most notable initiatives towards whole application scalability in cloud environments are presented. We present relevant efforts at the edge of state of the art technology, providing an encompassing overview of the trends they each follow. We also highlight pending challenges that will likely be addressed in new research efforts and present an ideal scalable cloud system. Title Casting doubts on the viability of WiFi offloading Abstract With the advent of the smartphone, mobile data usage has exploded which in turn has created tremendous pressure on cellular data networks. A promising candidate to reduce the impact of cellular data growth is WiFi offloading. However, recent data from our study of two hundred student smartphone users casts doubts on the reductions that can be gained from WiFi offloading. Despite the users operating in a dense university WiFi environment, cellular consumption still dominated overall data usage. We believe the root cause of lesser WiFi utilization can be traced to the WiFi being optimized for laptop WiFi reception rather than the more constrained smartphone WiFi reception. Our work examines the relationship of WiFi versus 3G usage through a variety of aspects including active phone usage, application types, and traffic volume over an eight week period from the Spring of 2012. Title Procedure hopping: a low overhead solution to mitigate variability in shared-L1 processor clusters Abstract Variation in performance and power across manufactured parts and their operating conditions is a well-known issue in advanced CMOS processes. This paper proposes a resilient HW/SW architecture for shared-L1 processor clusters to combat both static and dynamic variations. We first introduce the notion of procedure-level vulnerability ( Title Fan-speed-aware scheduling of data intensive jobs Abstract As server processor power densities increase, the cost of air cooling also grows resulting from higher fan speeds. Our measurements show that vibrations induced by fans in high-end servers and its rack neighbors cause a dramatic drop in hard disk bandwidth, resulting in a corresponding decrease in application performance. In this paper we quantify the performance and energy cost effects of the fan vibrations and propose a disk performance aware thermal, energy and cooling technique. 
Results show that we can not only meet thermal constraints, but also improve performance by 1.35x as compared to the conventional methods. Title A game theoretic resource allocation for overall energy minimization in mobile cloud computing system Abstract Cloud computing and virtualization techniques provide mobile devices with battery energy saving opportunities by allowing them to offload computation and execute code remotely. When the cloud infrastructure consists of heterogeneous servers, the mapping between mobile devices and servers plays an important role in determining the energy dissipation on both sides. From an environmental impact perspective, any energy dissipation related to computation should be counted. To achieve energy sustainability, it is important reducing the overall energy consumption of the mobile systems and the cloud infrastructure. Furthermore, reducing cloud energy consumption can potentially reduce the cost of mobile cloud users because the pricing model of cloud services is pay-by-usage. In this paper, we propose a game-theoretic approach to optimize the overall energy in a mobile cloud computing system. We formulate the energy minimization problem as a congestion game, where each mobile device is a player and his strategy is to select one of the servers to offload the computation while minimizing the overall energy consumption. We prove that the Nash equilibrium always exists in this game and propose an efficient algorithm that could achieve the Nash equilibrium in polynomial time. Experimental results show that our approach is able to reduce the total energy of mobile devices and servers compared to a random approach and an approach which only tries to reduce mobile devices alone. Title Distributed sensor data processing for many-cores Abstract Future many-core systems will rely heavily on a wide variety of sensors which provide run-time information about on-chip environment and workload. In this paper, a new dedicated infrastructure for distributed sensor processing for many-core systems is described. This infrastructure includes a sparse array of dedicated processors which evaluate on-chip sensor data and a two-level hierarchical network-on-chip (NoC) which allows for efficient sensor data collection. This design is evaluated using benchmark driven simulations for a three-dimensional (3D) stack, necessitating inter-layer sensor data communication. The experimental results for up to 1024 cores indicate that for typical sensor data collection rates, one sensor data processor (SDP) per 64 cores is optimal for sensor data latency. The use of a two-level NoC is shown to provide an average of 65% sensor data latency improvement versus a flat sensor data NoC structure for a 256-core system. Title NCS security experimentation using DETER Abstract Numerous efforts are underway to develop testing and experimentation tools to evaluate the performance of networked control systems (NCS) and supervisory control and data acquisition (SCADA) systems. These tools offer varying levels of fidelity and scale. Yet, researchers lack an experimentation framework for systematic testing and evaluation of NCS reliability and security under a wide range of failure scenarios. In this paper, we propose a modular experimentation framework that integrates the NCS semantics with the DETERLab cyber security experimentation facilities. 
We develop several attack scenarios with realistic network topology and network traffic configurations to evaluate the impact of denial of service (DoS) attacks on scalar linear systems. We characterize the impact of the attack dynamics on six plants located at various levels in a hierarchical topology. Our results suggest that emulation-based evaluations can provide novel insights about the network-induced security and reliability failures in large scale NCS. Title Surviving a search engine overload Abstract Search engines are an essential component of the web, but their web crawling agents can impose a significant burden on heavily loaded web servers. Unfortunately, blocking or deferring web crawler requests is not a viable solution due to economic consequences. We conduct a quantitative measurement study on the impact and cost of web crawling agents, seeking optimization points for this class of request. Based on our measurements, we present a practical caching approach for mitigating search engine overload, and implement the two-level cache scheme on a very busy web server. Our experimental results show that the proposed caching framework can effectively reduce the impact of search engine overload on service quality. Title Delta-FTL: improving SSD lifetime via exploiting content locality Abstract NAND flash-based SSDs suffer from limited lifetime due to the fact that NAND flash can only be programmed or erased for limited times. Among various approaches to address this problem, we propose to reduce the number of writes to the flash via exploiting the content locality between the write data and its corresponding old version in the flash. This content locality means, the new version, i.e., the content of a new write request, shares some extent of similarity with its old version. The information redundancy existing in the difference (delta) between the new and old data leads to a small compression ratio. The key idea of our approach, named Delta-FTL (Delta Flash Translation Layer), is to store this compressed delta in the SSD, instead of the original new data, in order to reduce the number of writes committed to the flash. This write reduction further extends the lifetime of SSDs due to less frequent garbage collection process, which is a significant write amplification factor in SSDs. Experimental results based on our Delta-FTL prototype show that Delta-FTL can significantly reduce the number of writes and garbage collection operations and thus improve SSD lifetime at a cost of trivial overhead on read latency performance. Title Automatic creation of VPN backup paths for improved resilience against BGP-attackers Abstract Virtual private networks (VPNs) play an integral role in corporate and governmental communication systems nowadays. As such they are by definition an exposed target for attacks on the availability of whole communication infrastructures. A comparably effective way to disturb VPNs is the announcement of the involved IP address ranges by compromised BGP routers. Since in the foreseeable future criminals may focus on such attacks, this article discusses the intelligent creation of backup paths in the context of VPNs as a countermeasure. The proposed system is evaluated in simulations as well as in a prototypic environment. 
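To make the "content locality" argument of the Delta-FTL abstract above concrete: when a page is overwritten with data similar to its old version, the byte-wise delta is mostly zeros and compresses to a fraction of the page size, so storing a compressed delta instead of the full new page saves flash writes. The C sketch below is a toy illustration (64-byte page, naive run-length encoding), not the compression scheme an actual FTL would use.

    #include <stdio.h>
    #include <string.h>

    #define PAGE 64

    /* Crude run-length encoder size estimate: 2 output bytes per run. */
    static int rle_size(const unsigned char *d, int n)
    {
        int out = 0;
        for (int i = 0; i < n; ) {
            int j = i;
            while (j < n && d[j] == d[i] && j - i < 255)
                j++;
            out += 2;
            i = j;
        }
        return out;
    }

    int main(void)
    {
        unsigned char oldp[PAGE], newp[PAGE], delta[PAGE];
        memset(oldp, 0xAB, PAGE);
        memcpy(newp, oldp, PAGE);
        newp[10] ^= 0xFF;   /* the update touches only a few bytes */
        newp[11] ^= 0x0F;

        for (int i = 0; i < PAGE; i++)
            delta[i] = oldp[i] ^ newp[i];   /* mostly zero when versions are similar */

        printf("full page write: %d bytes\n", PAGE);
        printf("encoded delta  : %d bytes\n", rle_size(delta, PAGE));
        return 0;
    }

Here the delta encodes in 8 bytes instead of 64, and fewer bytes written per update means less frequent garbage collection, which is where the lifetime benefit described in the abstract comes from.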
CCS Computer systems organization Dependable and fault-tolerant systems and networks Maintainability and maintenance Title Dynamically scaling applications in the cloud Abstract Scalability is said to be one of the major advantages brought by the cloud paradigm and, more specifically, the one that makes it different to an "advanced outsourcing" solution. However, there are some important pending issues before making the dreamed automated scaling for applications come true. In this paper, the most notable initiatives towards whole application scalability in cloud environments are presented. We present relevant efforts at the edge of state of the art technology, providing an encompassing overview of the trends they each follow. We also highlight pending challenges that will likely be addressed in new research efforts and present an ideal scalable cloud system. Title Casting doubts on the viability of WiFi offloading Abstract With the advent of the smartphone, mobile data usage has exploded which in turn has created tremendous pressure on cellular data networks. A promising candidate to reduce the impact of cellular data growth is WiFi offloading. However, recent data from our study of two hundred student smartphone users casts doubts on the reductions that can be gained from WiFi offloading. Despite the users operating in a dense university WiFi environment, cellular consumption still dominated overall data usage. We believe the root cause of lesser WiFi utilization can be traced to the WiFi being optimized for laptop WiFi reception rather than the more constrained smartphone WiFi reception. Our work examines the relationship of WiFi versus 3G usage through a variety of aspects including active phone usage, application types, and traffic volume over an eight week period from the Spring of 2012. Title Procedure hopping: a low overhead solution to mitigate variability in shared-L1 processor clusters Abstract Variation in performance and power across manufactured parts and their operating conditions is a well-known issue in advanced CMOS processes. This paper proposes a resilient HW/SW architecture for shared-L1 processor clusters to combat both static and dynamic variations. We first introduce the notion of procedure-level vulnerability ( Title Fan-speed-aware scheduling of data intensive jobs Abstract As server processor power densities increase, the cost of air cooling also grows resulting from higher fan speeds. Our measurements show that vibrations induced by fans in high-end servers and its rack neighbors cause a dramatic drop in hard disk bandwidth, resulting in a corresponding decrease in application performance. In this paper we quantify the performance and energy cost effects of the fan vibrations and propose a disk performance aware thermal, energy and cooling technique. Results show that we can not only meet thermal constraints, but also improve performance by 1.35x as compared to the conventional methods. Title A game theoretic resource allocation for overall energy minimization in mobile cloud computing system Abstract Cloud computing and virtualization techniques provide mobile devices with battery energy saving opportunities by allowing them to offload computation and execute code remotely. When the cloud infrastructure consists of heterogeneous servers, the mapping between mobile devices and servers plays an important role in determining the energy dissipation on both sides. 
From an environmental impact perspective, any energy dissipation related to computation should be counted. To achieve energy sustainability, it is important to reduce the overall energy consumption of the mobile systems and the cloud infrastructure. Furthermore, reducing cloud energy consumption can potentially reduce the cost for mobile cloud users because the pricing model of cloud services is pay-by-usage. In this paper, we propose a game-theoretic approach to optimize the overall energy in a mobile cloud computing system. We formulate the energy minimization problem as a congestion game, where each mobile device is a player and its strategy is to select one of the servers to offload the computation while minimizing the overall energy consumption. We prove that the Nash equilibrium always exists in this game and propose an efficient algorithm that can achieve the Nash equilibrium in polynomial time. Experimental results show that our approach is able to reduce the total energy of mobile devices and servers compared to a random approach and an approach that tries to reduce the energy of the mobile devices alone. Title Distributed sensor data processing for many-cores Abstract Future many-core systems will rely heavily on a wide variety of sensors which provide run-time information about on-chip environment and workload. In this paper, a new dedicated infrastructure for distributed sensor processing for many-core systems is described. This infrastructure includes a sparse array of dedicated processors which evaluate on-chip sensor data and a two-level hierarchical network-on-chip (NoC) which allows for efficient sensor data collection. This design is evaluated using benchmark-driven simulations for a three-dimensional (3D) stack, necessitating inter-layer sensor data communication. The experimental results for up to 1024 cores indicate that for typical sensor data collection rates, one sensor data processor (SDP) per 64 cores is optimal for sensor data latency. The use of a two-level NoC is shown to provide an average of 65% sensor data latency improvement versus a flat sensor data NoC structure for a 256-core system. Title NCS security experimentation using DETER Abstract Numerous efforts are underway to develop testing and experimentation tools to evaluate the performance of networked control systems (NCS) and supervisory control and data acquisition (SCADA) systems. These tools offer varying levels of fidelity and scale. Yet, researchers lack an experimentation framework for systematic testing and evaluation of NCS reliability and security under a wide range of failure scenarios. In this paper, we propose a modular experimentation framework that integrates the NCS semantics with the DETERLab cyber security experimentation facilities. We develop several attack scenarios with realistic network topology and network traffic configurations to evaluate the impact of denial of service (DoS) attacks on scalar linear systems. We characterize the impact of the attack dynamics on six plants located at various levels in a hierarchical topology. Our results suggest that emulation-based evaluations can provide novel insights about the network-induced security and reliability failures in large scale NCS. Title Surviving a search engine overload Abstract Search engines are an essential component of the web, but their web crawling agents can impose a significant burden on heavily loaded web servers. Unfortunately, blocking or deferring web crawler requests is not a viable solution due to economic consequences.
We conduct a quantitative measurement study on the impact and cost of web crawling agents, seeking optimization points for this class of request. Based on our measurements, we present a practical caching approach for mitigating search engine overload, and implement the two-level cache scheme on a very busy web server. Our experimental results show that the proposed caching framework can effectively reduce the impact of search engine overload on service quality. Title Delta-FTL: improving SSD lifetime via exploiting content locality Abstract NAND flash-based SSDs suffer from limited lifetime due to the fact that NAND flash can only be programmed or erased for limited times. Among various approaches to address this problem, we propose to reduce the number of writes to the flash via exploiting the content locality between the write data and its corresponding old version in the flash. This content locality means, the new version, i.e., the content of a new write request, shares some extent of similarity with its old version. The information redundancy existing in the difference (delta) between the new and old data leads to a small compression ratio. The key idea of our approach, named Delta-FTL (Delta Flash Translation Layer), is to store this compressed delta in the SSD, instead of the original new data, in order to reduce the number of writes committed to the flash. This write reduction further extends the lifetime of SSDs due to less frequent garbage collection process, which is a significant write amplification factor in SSDs. Experimental results based on our Delta-FTL prototype show that Delta-FTL can significantly reduce the number of writes and garbage collection operations and thus improve SSD lifetime at a cost of trivial overhead on read latency performance. Title Automatic creation of VPN backup paths for improved resilience against BGP-attackers Abstract Virtual private networks (VPNs) play an integral role in corporate and governmental communication systems nowadays. As such they are by definition an exposed target for attacks on the availability of whole communication infrastructures. A comparably effective way to disturb VPNs is the announcement of the involved IP address ranges by compromised BGP routers. Since in the foreseeable future criminals may focus on such attacks, this article discusses the intelligent creation of backup paths in the context of VPNs as a countermeasure. The proposed system is evaluated in simulations as well as in a prototypic environment. CCS Computer systems organization Dependable and fault-tolerant systems and networks Processors and memory architectures CCS Computer systems organization Dependable and fault-tolerant systems and networks Secondary storage organization CCS Computer systems organization Dependable and fault-tolerant systems and networks Redundancy Title A noise-immune sub-threshold circuit design based on selective use of Schmitt-trigger logic Abstract Nanoscale circuits operating at sub-threshold voltages are affected by growing impact of random telegraph signal (RTS) and thermal noise. Given the low operational voltages and subsequently lower noise margins, these noise phenomena are capable of changing the value of some of the nodes in the circuit, compromising the reliability of the computation. We propose a method for improving noise-tolerance by selectively applying feed-forward reinforcement to circuits based on use of existing invariant relationships. 
As the reinforcement mechanism, we used a modification of the standard CMOS gates based on the Schmitt trigger circuit. SPICE simulations show our solution offers better noise immunity than both standard CMOS and fully reinforced circuits, with limited area and power overhead. Title FFT-cache: a flexible fault-tolerant cache architecture for ultra low voltage operation Abstract Caches are known to consume a large part of total microprocessor power. Traditionally, voltage scaling has been used to reduce both dynamic and leakage power in caches. However, aggressive voltage reduction causes process-variation-induced failures in cache SRAM arrays, which compromise cache reliability. In this paper, we propose Flexible Fault-Tolerant Cache (FFT-Cache) that uses a flexible defect map to configure its architecture to achieve significant reduction in energy consumption through aggressive voltage scaling, while maintaining high error reliability. FFT-Cache uses a portion of faulty cache blocks as redundancy -- using block-level or line-level replication within or between sets to tolerate other faulty cache lines and blocks. Our configuration algorithm categorizes the cache lines based on the degree of conflict of their blocks to reduce the granularity of redundancy replacement. FFT-Cache thereby sacrifices a minimal number of cache lines to avoid impacting performance while tolerating the maximum amount of defects. Our experimental results on SPEC2K benchmarks demonstrate that the operational voltage can be reduced down to 375mV, which achieves up to 80% reduction in dynamic power and up to 48% reduction in leakage power with small performance impact and area overhead. Title Low power robust signal processing Abstract Title Yield improvement and power aware low cost memory chips Abstract Memories are among the densest integrated circuits that can be fabricated and therefore have the highest rate of defects. This paper discusses an efficient technique for designing low cost, highly defect tolerant RAM chips. A 25% improvement in the yield is presented. The paper proposes a scheme that selects the right redundancy in memory designs driven by the fabrication cost and the yield. The new memory chip design technique fills the gap between the existing all-or-none extremes with memories. Area is sacrificed for these performance improvements, for significant power savings, and for the significant improvement in yield. Title A new placement algorithm for the optimization of fault tolerant circuits on reconfigurable devices Abstract Reconfigurable logic devices such as SRAM-based Field Programmable Gate Arrays (FPGAs) are nowadays increasingly popular thanks to their capability of implementing complex circuits with very short development time and their high versatility in implementing different kinds of applications, ranging from signal processing to networking. The usage of reconfigurable devices in safety critical fields such as space or avionics requires the adoption of specific fault tolerant techniques, like Triple Modular Redundancy (TMR), in order to protect their functionality against radiation effects. While these techniques increase the protection capability against radiation effects, they introduce several design penalties, particularly in terms of performance. In this paper, we present an innovative placement algorithm able to implement fault tolerant circuits on SRAM-based FPGAs while reducing the performance penalties.
This algorithm is based on a model-based topology heuristic that addresses the arithmetic modules implemented on the FPGA, reducing the interconnection delays between their resources. Experimental evaluations performed by means of timing analysis and fault injection on two industrial-like case studies demonstrated that the proposed algorithm is able to improve the running frequency by up to 44% versus standard TMR-based techniques while maintaining complete fault tolerance capabilities. Title Trends in energy-efficiency and robustness using stochastic sensor network-on-a-chip Abstract The stochastic sensor network-on-chip (SSNOC) was recently proposed as an effective computational paradigm for jointly achieving energy-efficiency and robustness in nanoscale processes. In this paper, we study the trends in energy-efficiency and robustness exhibited by an SSNOC architecture as the feature size scales from 130nm to 32nm for a PN-code acquisition application. The conventional architecture exhibits a 3 orders-of-magnitude loss in detection probability P_det due to process variations in the 130nm and smaller technology nodes. At the 130nm and 90nm nodes, the proposed SSNOC architecture recovers from this performance loss, and exhibits a 2 orders-of-magnitude smaller variation in P_det compared to the conventional architecture. However, for the 65nm and 45nm technology nodes, the SSNOC architecture with assistance from circuit level techniques such as adaptive body bias (ABB) and adaptive supply voltage (ASV) shows 2-3 orders-of-magnitude better detection performance. In addition, the SSNOC architecture with ABB/ASV achieves 22% to 31% energy savings. For the 32nm node, the current version of SSNOC with ABB/ASV is not robust enough and thus motivates the need to explore even more powerful versions of SSNOC. Title Identifying sequential redundancies without search Abstract Title On the reliability of consensus-based fault-tolerant distributed computing systems Abstract NA Title Byzantine generals in action: implementing fail-stop processors Abstract CCS Computer systems organization Dependable and fault-tolerant systems and networks Fault-tolerant network topologies CCS Networks Network architectures Network design principles CCS Networks Network architectures Programming interfaces CCS Networks Network protocols Network protocol design Title The essential elements of successful innovation Abstract It has become a truism that innovation in the information and communications technology (ICT) fields is occurring faster than ever before. This paper posits that successful innovation requires three essential elements: a need, know-how or knowledge, and favorable economics. The paper examines this proposition by considering three technical areas in which there has been significant innovation in recent years: server virtualization and the cloud, mobile application optimization, and mobile speech services. An understanding of the elements that contribute to successful innovation is valuable to anyone who does either fundamental or applied research in fields of information and communication technology. Title Understanding bufferbloat in cellular networks Abstract Bufferbloat is a prevalent problem in the Internet where excessive buffers incur long latency, substantial jitter and sub-optimal throughput. This work provides the first elaborative understanding of bufferbloat in cellular networks. We carry out extensive measurements in the 3G/4G networks of the four major U.S.
carriers to gauge the impact of bufferbloat in the field. In this paper, we identify several pitfalls of current TCP protocols that arise from the bufferbloat problem. We also discover a trick employed by smartphone vendors to mitigate the issue and point out the limitations of such ad-hoc solutions. Our measurement study is coupled with theoretical analysis using queuing models. Finally, we comprehensively discuss candidate solutions to this problem and argue for a TCP-based end-to-end solution. Title Efficient and reliable low-power backscatter networks Abstract There is a long-standing vision of embedding backscatter nodes like RFIDs into everyday objects to build ultra-low power ubiquitous networks. A major problem that has challenged this vision is that backscatter communication is neither reliable nor efficient. Backscatter nodes cannot sense each other, and hence tend to suffer from colliding transmissions. Further, they are ineffective at adapting the bit rate to channel conditions, and thus miss opportunities to increase throughput, or transmit above capacity causing errors. This paper introduces a new approach to backscatter communication. The key idea is to treat all nodes as if they were a single virtual sender. One can then view collisions as a code across the bits transmitted by the nodes. By ensuring only a few nodes collide at any time, we make collisions act as a Title On energy consumption analysis for ad hoc routing protocols Abstract The presence of untethered nodes and the lack of backbone infrastructure in ad-hoc wireless networks make energy a very critical resource. In this work we analyze the energy consumption during the various stages of routing - route discovery, message transmission and route maintenance. Since the routing strategies differ widely for different routing protocols, we study some representative routing protocols. We examine the reactive protocols AODV and DSR, and in the case of proactive protocols we examine OLSR and DSDV. Our analysis takes into account the wake-up schemes. For our analytical model to compute the total energy consumption, we need the energy consumed by a node during transmission, reception, or while it is idle. To obtain these values we simulated the protocols using ns-2. The experimental results were in conformance with the theoretical behaviour of the routing models. Title Enabling real-time interference alignment: promises and challenges Abstract As its name suggests, "interference alignment" is a class of transmission schemes that aligns multiple sources of interference to minimize its impact, thus aiming to maximize rate in an interference network. To our knowledge, this paper presents the first real-time implementation of interference alignment. Other implementations in the literature are either done offline or assume a backchannel between participating nodes to perform alignment. On the other hand, this paper presents a blind interference alignment scheme, one that does not require channel state information at the transmitters or the knowledge of other transmitters' data or the knowledge of data between receivers, and functions in real time. Title DirectPath: high performance and energy efficient platform I/O architecture for content intensive usages Abstract With the widespread development of cloud computing and high speed communications, end users store or retrieve video, music, photo and other contents over the cloud or the local network for video-on-demand, wireless display and other usages.
The traditional I/O model in a mobile platform consumes time and resources due to excessive memory access and copying when transferring content from a source device, e.g., network controller, to a destination device, e.g., hard disk. This model introduces unnecessary overhead and latency, negatively impacting the performance and energy consumption of content-intensive uses. In this paper, we introduce DirectPath, a low overhead I/O architecture that optimizes content movement within a platform to improve energy efficiency and throughput performance. We design, implement and validate the DirectPath architecture for a network-to-storage file download usage model. We evaluate and quantify DirectPath's energy and performance benefits on both laptop and small form-factor SoC based platforms. The measurement results show that DirectPath reduces energy consumption by up to 50% and improves throughput performance by up to 137%. Title Enhanced guaranteed time slot algorithm for the IEEE802.15.4 standard Abstract The IEEE 802.15.4 Wireless Personal Area Networking standard, which was designed for low-data-rate and low-energy-demanding wireless networking technologies such as wireless sensor networks, provides a Guaranteed Time Slot (GTS) algorithm that allows nodes requiring low-latency channel access to reserve contention-free time slots. However, the standard limits the number of simultaneously allocated GTSs to seven, which increases the failure rate of accommodating a growing demand for GTS allocation. This paper introduces an enhanced priority-based GTS algorithm for the IEEE 802.15.4 standard. The proposed algorithm allows nodes with similar sensing tasks and within close proximity of each other to prioritize their channel access based on their remaining energy. The performance of the proposed algorithm was experimentally evaluated and the results demonstrated an enhancement over the standard GTS algorithm of up to 20% in both consumed energy and GTS allocation latency. Title Providing ubiquitous networks securely using host identity protocol (HIP) Abstract In an ideal ubiquitous network, anyone should be able to get a connection to the Internet as long as some connectivity to the Internet exists there. A network administrator is supposed to provide a network to public visitors without any explicit permission such as registration of users. At the same time, when an incident has occurred, such as an illegal access by a user, the network administrator needs to be able to trace the user and to establish who the user is. Additionally, the proof that the network administrator has not committed incorrect accesses should be ensured because the network administrator is not trusted. That is to say, nonrepudiation should be ensured. To solve these problems, we apply the Host Identity Protocol (HIP) to implement secure ubiquitous networks. In our network, users can connect only by HIP. We propose an authentication scheme that does not impose management work on the network administrator. We discuss how the network administrator can ensure nonrepudiation and the traceability of users. Title An internet protocol stack for high-speed transmission in a non-OS environment Abstract Today's embedded systems are required to have stronger wired/wireless communication capability. Due to strict limitations on how they utilize resources such as power, address space, and processing ability, network protocols for embedded systems are designed to make the best use of constrained resources.
They are developed in a way that allows for support from a range of operating systems and thus utilizes platform-independent architectures. This paper presents an Internet protocol stack design for embedded systems that operate without an operating system. Our scheme schedules the transmission and the reception of the data packets and uses cross-layer optimization. We implemented the proposed scheme in an LTE network device. The overhead of our scheme is low and it subsequently meets the constraint of transmission speed in next generation mobile communication environments. Title On realization of reliable link layer protocols with guaranteed sustainable flows for wireless communication Abstract Despite major developments in link-layer design to address reliability issues associated with wireless communication (in the presence of heavy noise), these efforts fall short on many fronts. This includes a clear demonstration regarding the viability of a truly reliable link layer capable of providing a minimal level of guaranteed sustainable flows for the higher layers. In this paper, we present an analytical and experimental study to design and implement a reliable wireless link layer that provides sustainable flow control. We develop an experimental platform using Software Defined Radio (SDR) technology with the Universal Software Radio Peripheral (USRP) frontend to capture and measure the behavior of an error process imposed on a wireless channel. Next, we design a Reliable And StablE (RASE) link-layer protocol to provide reliability (by achieving optimal throughput) and stability (by ensuring a sustainable traffic flow) for realtime and non-realtime wireless communications. We then incorporate the RASE protocol into the SDR-USRP platform to investigate the level of throughput and realtime stability achieved in comparison with the IEEE802.11 ARQ and the FEC-based HARQ protocols. We demonstrate experimentally that RASE provides 20%-50% improved reliability. In addition, realtime video communication experiments show a 2-8dB PSNR gain in playback quality. CCS Networks Network protocols Protocol correctness CCS Networks Network protocols Link-layer protocols Title Study of chord model based on hybrid structure Abstract Chord is a structured P2P model that can quickly locate resources, but because its search process does not reflect the actual physical addresses of nodes, queries incur additional delay, and when node capacities differ widely, the stability of the network is also affected. Hybrid P2P takes the differences in node capacity into account, but its lookups are blind. This paper presents a hybrid structure based on the Chord system, which to a certain extent addresses Chord's stability and routing problems as well as the query efficiency of the hybrid P2P structure. Title Performance comparison of 3G and metro-scale WiFi for vehicular network access Abstract We perform a head-to-head comparison of the performance characteristics of a 3G network operated by a nation-wide provider and a metro-scale WiFi network operated by a commercial ISP, from the perspective of vehicular network access. Our experience shows that over a wide geographic region and under vehicular mobility, these networks exhibit very different throughput and coverage characteristics. WiFi has frequent disconnections even in a commercially operated, metro-scale deployment; but when connected, it indeed delivers high throughputs even in a mobile scenario.
The 3G network offers similar or lower throughputs in general, but provides excellent coverage and less throughput variability. The two networks' characteristics are often complementary. It is conceivable that these properties can be judiciously exploited in a hybrid network design where 3G data is offloaded to WiFi to improve performance, reduce 3G network congestion, and lower costs. Title Evaluation of hardware and software schedulers for embedded switches Abstract High-speed packet switches are becoming increasingly important to embedded systems because they provide the multiple parallel data paths necessary in emerging systems such as embedded multiprocessors, multiprotocol communication processors, and so on. The most promising architecture for embedded switches is the one that uses multiple input queues, due to its low-cost integration in conventional embedded systems, which include memory management subsystems. Such switches require high-speed schedulers in order to resolve conflicts among packet destinations and to achieve low latency, high bandwidth communication, while providing fairness guarantees. In general, these schedulers are categorized as centralized or distributed, depending on their operation. In this paper, we evaluate hardware and software implementations of two schedulers: 2-dimensional round-robin and FIRM, which are centralized and distributed, respectively. The evaluation is performed for embedded system implementation, on a system that includes an FPGA and an embedded processor on-chip. The performance results show that, in contrast to expectations, centralized schedulers provide better performance than distributed ones in hardware implementations. In software implementations for embedded processors, surprisingly, distributed schedulers achieve better performance, due to better management of the processor's limited resources and simpler code; our experiments have shown that compilers for embedded systems are quite limited and require significant improvement. Finally, we evaluate the scalability of the schedulers, in terms of throughput, circuit complexity, and power consumption, based on implementation technology, considering the dramatic improvements expected in the availability of high-speed programmable logic and embedded processors on the same chip. Title Measurement of ATM frame latency Abstract Title ATM: a retrospective on systems legacy or "a technology with a fabulous future behind it?" Abstract Title A retrospective view of ATM Abstract Title Rationalizing key design decisions in the ATM user plane Abstract Title A perspective on how ATM lost control Abstract Title The influence of ATM on operating systems Abstract Title Randomized k-set agreement Abstract In the case of the consensus problem, two main approaches have been investigated to circumvent this impossibility: randomization and unreliable failure detectors. For the more general case of the k-set agreement problem, this paper presents a randomization approach. CCS Networks Network protocols Network layer protocols CCS Networks Network protocols Transport protocols Title Internet and the Erlang formula Abstract We demonstrate that the Internet has a formula linking demand, capacity and performance that in many ways is the analogue of the Erlang loss formula of telephony. Surprisingly, this formula is none other than the Erlang delay formula. It provides an upper bound on the probability that a flow of given peak rate suffers degradation when bandwidth sharing is max-min fair.
Apart from the flow rate, the only relevant parameters are link capacity and overall demand. We explain why this result is valid under a very general and realistic traffic model and discuss its significance for network engineering. Title "Network Neutrality": the meme, its cost, its future Abstract In June 2011 I participated on a panel on network neutrality hosted at the June cybersecurity meeting of the DHS/SRI Infosec Technology Transition Council (ITTC), where "experts and leaders from the government, private, financial, IT, venture capitalist,and academia and science sectors came together to address the problem of identity theft and related criminal activity on the Internet." I recently wrote up some of my thoughts on that panel, including what network neutrality has to do with cybersecurity. Title Exploring mobile/WiFi handover with multipath TCP Abstract Mobile Operators see an unending growth of data traffic generated by their customers on their mobile data networks. As the operators start to have a hard time carrying all this traffic over 3G or 4G networks, offloading to WiFi is being considered. Multipath TCP (MPTCP) is an evolution of TCP that allows the simultaneous use of multiple interfaces for a single connection while still presenting a standard TCP socket API to the application. The protocol specification of Multipath TCP has foreseen the different building blocks to allow transparent handover from WiFi to 3G back and forth. In this paper we experimentally prove the feasibility of using MPTCP for mobile/WiFi handover in the current Internet. Our experiments run over real WiFi/3G networks and use our Linux kernel implementation of MPTCP that we enhanced to better support handover. We analyze MPTCP's energy consumption and handover performance in various operational modes. We find that MPTCP enables smooth handovers offering reasonable performance even for very demanding applications such as VoIP. Finally, our experiments showed that lost MPTCP control signals can adversely affect handover performance; we implement and test a simple but effective solution to this issue. Title Adaptive scalable video streaming in wireless networks Abstract In this paper, we investigate the optimal streaming strategy for dynamic adaptive streaming over HTTP (DASH). Specifically, we focus on the rate adaptation algorithm for streaming scalable video (H.264/SVC) in wireless networks. We model the rate adaptation problem as a Markov Decision Process (MDP), aiming to find an optimal streaming strategy in terms of user-perceived quality of experience (QoE) such as playback interruption, average playback quality and playback smoothness. We then obtain the optimal MDP solution using dynamic programming. We further define a reward parameter in our proposed streaming strategy, which can be adjusted to make a good trade-off between the average playback quality and playback smoothness. We also use a simple testbed to validate our solution. Experiment results show the feasibility of the proposed solution and its advantage over the existing work. Title ACCENT: Cognitive cryptography plugged compression for SSL/TLS-based cloud computing services Abstract Emerging cloud services, including mobile offices, Web-based storage services, and content delivery services, run diverse workloads under various device platforms, networks, and cloud service providers. They have been realized on top of SSL/TLS, which is the de facto protocol for end-to-end secure communication over the Internet. 
In an attempt to achieve a cognitive SSL/TLS with heterogeneous environments (device, network, and cloud) and workload awareness, we thoroughly analyze SSL/TLS-based data communication and identify three critical mismatches in conventional SSL/TLS-based data transmission. The first mismatch is the performance of loosely coupled encryption-compression and communication routines that lead to underutilized computation and communication resources. The second mismatch is that conventional SSL/TLS only provides a static compression mode, irrespective of the dynamically changing status of each SSL/TLS connection and the computing power gap between the cloud service provider and diverse device platforms. The third is the memory allocation overhead due to frequent compression switching in SSL/TLS. As a remedy to these rudimentary operations, we present a system called the Adaptive Cryptography Plugged Compression Network (ACCENT) for SSL/TLS-based cloud services. It comprises the following three novel mechanisms, each of which aims to provide optimal SSL/TLS communication and maximize the network transfer performance of an SSL/TLS protocol stack: tightly-coupled threaded SSL/TLS coding, floating scale-based adaptive compression negotiation, and unified memory allocation for seamless compression switching. We implemented and tested the mechanisms in OpenSSL-1.0.0. ACCENT is integrated into the Web-interface layer and SSL/TLS-based secure storage service within a real cloud computing service, called Title Torii HLMAC: distributed, fault-tolerant, zero configuration data center architecture with multiple tree-based addressing and forwarding Abstract This paper describes Torii-HLMAC, a scalable, fault-tolerant, zero-configuration data center network fabric architecture (currently under final evaluation), as a fully distributed alternative to Portland for similar multiple tree (fat tree) network topologies. It uses multiple, fixed, tree-based positional MAC addresses for multiple-path, table-free forwarding. Addresses are assigned by a simple extension of the Rapid Spanning Tree Protocol. Torii-HLMAC retains the Portland protocol's advantages of scalability, zero configuration and high performance, and adds instant path recovery and distributed address assignment; ARP broadcasts may use an ARP proxy. Title Bufferbloat: Dark Buffers in the Internet Abstract Title Pingin' in the rain Abstract Residential Internet connections are susceptible to weather-caused outages: Lightning and wind cause local power failures, direct lightning strikes destroy equipment, and water in the atmosphere degrades satellite links. Outages caused by severe events such as fires and undersea cable cuts are often reported upon by operators and studied by researchers. In contrast, outages caused by ordinary weather are typically limited in scope, and because of their small scale, there has not been comparable effort to understand how weather affects everyday last-mile Internet connectivity. We design and deploy a measurement tool called ThunderPing that measures the connectivity of residential Internet hosts before, during, and after forecast periods of severe weather. ThunderPing uses weather alerts from the US National Weather Service to choose a set of residential host addresses to ping from several vantage points on the Internet.
We then process this ping data to determine when hosts lose connectivity, completely or partially, and categorize whether these failures occur during periods of severe weather or when the skies are clear. In our preliminary results, we find that compared to clear weather, failures are four times as likely during thunderstorms and two times as likely during rain. We also find that the duration of weather induced outages is relatively small for a satellite provider we focused on. Title Web content cartography Abstract Recent studies show that a significant part of Internet traffic is delivered through Web-based applications. To cope with the increasing demand for Web content, large scale content hosting and delivery infrastructures, such as data-centers and content distribution networks, are continuously being deployed. Being able to identify and classify such hosting infrastructures is helpful not only to content producers, content providers, and ISPs, but also to the research community at large. For example, to quantify the degree of hosting infrastructure deployment in the Internet or the replication of Web content. In this paper, we introduce Web Content Cartography, i.e., the identification and classification of content hosting and delivery infrastructures. We propose a lightweight and fully automated approach to discover hosting infrastructures based only on DNS measurements and BGP routing table snapshots. Our experimental results show that our approach is feasible even with a limited number of well-distributed vantage points. We find that some popular content is served exclusively from specific regions and ASes. Furthermore, our classification enables us to derive content-centric AS rankings that complement existing AS rankings and shed light on recent observations about shifts in inter-domain traffic and the AS topology. Title Implementing ARP-path low latency bridges in NetFPGA Abstract The demo is focused on the implementation of ARP-Path (a.k.a. FastPath) bridges, a recently proposed concept for low latency bridges. ARP-Path Bridges rely on the race between broadcast ARP Request packets, to discover the minimum latency path to the destination host. Several implementations (in Omnet++, Linux, OpenFlow, NetFPGA) have shown that ARP-Path exhibits loop-freedom, does not block links, is fully transparent to hosts and neither needs a spanning tree protocol to prevent loops nor a link state protocol to obtain low latency paths. This demo compares our hardware implementation on NetFPGA to bridges running STP, showing that ARP-Path finds lower latency paths than STP. 
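The ARP-Path abstract above relies on a race between copies of a broadcast ARP Request: the port on which a bridge first sees a request from a given source marks the lowest-latency path back to that host, and later copies are discarded. The sketch below is a simplified, software-only illustration of that first-arrival learning rule; the data structures, expiry time, and method names are assumptions for illustration, not the NetFPGA implementation.

import time

ENTRY_LIFETIME = 10.0  # seconds an assumed learned path stays valid

class ArpPathBridge:
    def __init__(self):
        self.path_table = {}        # source MAC -> (ingress port, learn time)

    def on_broadcast_arp(self, src_mac, ingress_port):
        """First copy of a broadcast ARP frame wins: lock the ingress port for src_mac."""
        now = time.time()
        entry = self.path_table.get(src_mac)
        if entry is None or now - entry[1] > ENTRY_LIFETIME:
            self.path_table[src_mac] = (ingress_port, now)
            return True    # forward (flood) this copy; it arrived on the fastest path
        return False       # a lower-latency copy already arrived on another port; drop

    def out_port_for(self, dst_mac):
        """Unicast frames follow the port learned from the destination's own ARP race."""
        entry = self.path_table.get(dst_mac)
        if entry and time.time() - entry[1] <= ENTRY_LIFETIME:
            return entry[0]
        return None        # unknown destination; a real bridge would trigger path recovery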
CCS Networks Network protocols Session protocols CCS Networks Network protocols Presentation protocols CCS Networks Network protocols Application layer protocols CCS Networks Network protocols OAM protocols CCS Networks Network protocols Cross-layer protocols CCS Networks Network protocols Network File System (NFS) protocol CCS Networks Network components Intermediate nodes CCS Networks Network components Physical links CCS Networks Network components Middle boxes / network appliances CCS Networks Network components End nodes CCS Networks Network components Wireless access points, base stations and infrastructure CCS Networks Network components Logical nodes CCS Networks Network algorithms Data path algorithms CCS Networks Network algorithms Control path algorithms CCS Networks Network algorithms Network economics CCS Networks Network performance evaluation Network performance modeling CCS Networks Network performance evaluation Network simulations CCS Networks Network performance evaluation Network experimentation CCS Networks Network performance evaluation Network performance analysis CCS Networks Network performance evaluation Network measurement CCS Networks Network properties Network security CCS Networks Network properties Network range CCS Networks Network properties Network structure CCS Networks Network properties Network dynamics CCS Networks Network properties Network reliability CCS Networks Network properties Network mobility CCS Networks Network properties Network manageability CCS Networks Network properties Network privacy and anonymity CCS Networks Network services Naming and addressing CCS Networks Network services Cloud computing CCS Networks Network services Location based services CCS Networks Network services Programmable networks CCS Networks Network services In-network processing Title Performance of a conservative simulator of ATM networks Abstract Title Second moment resource allocation in multi-service networks Abstract Title A managerial analysis of ATM in facilitating distance education Abstract Title Mobility management in integrated wireless-ATM networks Abstract Title The shadow cluster concept for resource allocation and call admission in ATM-based wireless networks Abstract Title SpectrumWare: a software-oriented approach to wireless signal processing Abstract Title Rednet: a wireless ATM local area network using infrared links Abstract Title An architectural approach for integrated network and systems management Abstract Title Issues in distributed control for ATM networks Abstract NA Title Rate-based congestion control for ATM networks Abstract CCS Networks Network services Network management Title Discovering configuration templates of virtualized tenant networks in multi-tenancy datacenters via graph-mining Abstract Multi-tenant datacenter networking, with which multiple customer (tenant) networks are virtualized over a single shared physical infrastructure, is cost-effective but poses significant costs on manual configuration. Such tasks would be alleviated with configuration templates, whereas a crucial difficulty stems from creating appropriate (i.e., reusable) ones. In this work, we propose a graph-based method of mining configurations of existing tenants to extract their recurrent patterns that would be used as reusable templates for upcoming tenants. The effectiveness of the proposed method is demonstrated with actual configuration files obtained from a business datacenter network. 
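The template-mining abstract above extracts recurrent configuration patterns across existing tenants so they can be reused for new ones. A drastically simplified sketch of that idea follows: each tenant network is reduced to a set of labelled edges, and any small edge pattern seen in at least min_support tenants is reported as a template candidate. The representation, thresholds, and names are assumptions; the paper's graph-mining method is more elaborate.

from collections import Counter
from itertools import combinations

def mine_templates(tenants, min_support=2, max_pattern_size=3):
    """Count small edge-set patterns that recur across tenant configurations.

    `tenants` is an assumed list of tenant configs, each a set of labelled edges,
    e.g. {('fw', 'lb'), ('lb', 'web')}. Returns patterns (frozensets of edges)
    that appear in at least min_support tenants.
    """
    pattern_counts = Counter()
    for edges in tenants:
        seen = set()
        for size in range(1, max_pattern_size + 1):
            for combo in combinations(sorted(edges), size):
                seen.add(frozenset(combo))
        pattern_counts.update(seen)          # count each pattern once per tenant
    return [p for p, c in pattern_counts.items() if c >= min_support]

# Example: two tenants sharing a fw->lb->web chain yield that chain as a template candidate.
tenants = [
    {('fw', 'lb'), ('lb', 'web'), ('web', 'db')},
    {('fw', 'lb'), ('lb', 'web'), ('web', 'cache')},
]
for template in mine_templates(tenants):
    print(sorted(template))

This brute-force enumeration is only workable for tiny toy graphs; it is meant to show the recurrence-counting idea, not a scalable miner.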
Title Refactoring network infrastructure to improve manageability: a case study of home networking Abstract Managing a home network is challenging because the underlying infrastructure is so complex. Existing interfaces either hide or expose the network's underlying complexity, but in both cases, the information that is shown does not necessarily allow a user to complete desired tasks. Recent advances in software defined networking, however, permit a redesign of the underlying network and protocols, potentially allowing designers to move complexity further from the user and, in some cases, eliminating it entirely. In this paper, we explore whether the choices of what to make visible to the user in the design of today's home network infrastructure, performance, and policies make sense. We also examine whether new capabilities for refactoring the network infrastructure - changing the underlying system without compromising existing functionality - should cause us to revisit some of these choices. Our work represents a case study of how co-designing an interface and its underlying infrastructure could ultimately improve interfaces for that infrastructure. Title Towards a cost model for network traffic Abstract We develop a holistic cost model that operators can use to help evaluate the costs of various routing and peering decisions. Using real traffic data from a large carrier network, we show how network operators can use this cost model to significantly reduce the cost of carrying traffic in their networks. We find that adjusting the routing for a small fraction of total flows (and total traffic volume) significantly reduces cost in many cases. We also show how operators can use the cost model both to evaluate potential peering arrangements and for other network operations problems. Title A history of an internet exchange point Abstract In spite of the tremendous amount of measurement efforts on understanding the Internet as a global system, little is known about the 'local' Internet (among ISPs inside a region or a country) due to limitations of the existing measurement tools and scarce data. In this paper, empirical in nature, we characterize the evolution of one such ecosystem of local ISPs by studying the interactions between ISPs happening at the Slovak Internet eXchange (SIX). By crawling the web archive waybackmachine.org we collect 158 snapshots (spanning 14 years) of the SIX website, with the relevant data that allows us to study the dynamics of the Slovak ISPs in terms of: the local ISP peering, the traffic distribution, the port capacity/utilization and the local AS-level traffic matrix. Examining our data revealed a number of invariant and dynamic properties of the studied ecosystem that we report in detail. Title Experimentation made easy with the AMazING panel Abstract Experimental testbeds for evaluating solutions in computer networks, are today required as a complement to simulation and emulation. As these testbeds become larger, and accessible to a broader universe of the research community, dedicated management tools become mandatory. These tools ease the complex management of the testbed specific resources, while providing an environment for researchers to define their experiments with large flexibility. While there are currently several management tools, the research community is still lacking tools that smooth the experimentation workflow. These were key aspects that we considered when developing the management infrastructure for our wireless testbed(AMazING). 
We developed an experimentation support framework with an attractive GUI, automation and scripting capabilities, experiment versioning, and integrated result gathering and analysis. Title Towards utility-based resource management in heterogeneous wireless networks Abstract This work considers the network selection and the bandwidth assignment problems in the context of heterogeneous wireless networks operated by a single service provider. The current commercial and research practices on resource management are presented, and a novel utility-based approach supporting multiple client classes is introduced. A bandwidth sharing policy and a "controlled unfairness" scheme are achieved by combining distinct priority classes with logarithmic utility functions that variably grade the bandwidth allocated to clients. To demonstrate the possibilities of this approach, an optimisation problem is formulated and its solution utility-optimally allocates bandwidth and distributes clients to base stations, modelling a network-side resource management system. The centralised operation allows for network-wide provision of comparable service levels to clients of the same class. The optimal solution is compared to several heuristic methods. Simulations showcase the behaviour of the system, which successfully differentiates clients and utilises all available resources. Title Demonstrating the AMazING panel Abstract Title Hierarchical policies for software defined networks Abstract Hierarchical policies are useful in many contexts in which resources are shared among multiple entities. Such policies can easily express the delegation of authority and the resolution of conflicts, which arise naturally when decision-making is decentralized. Conceptually, a hierarchical policy could be used to manage network resources, but commodity switches, which match packets using flow tables, do not realize hierarchies directly. This paper presents Title NetPilot: automating datacenter network failure mitigation Abstract Driven by the soaring demands for always-on and fast-response online services, modern datacenter networks have recently undergone tremendous growth. These networks often rely on commodity hardware to reach immense scale while keeping capital expenses under check. The downside is that commodity devices are prone to failures, raising a formidable challenge for network operators to promptly handle these failures with minimal disruptions to the hosted services. Recent research efforts have focused on automatic failure localization. Yet, resolving failures still requires significant human intervention, resulting in prolonged failure recovery time. Unlike previous work, NetPilot aims to quickly Title AutoNetkit: simplifying large scale, open-source network experimentation Abstract We present a methodology that brings simplicity to large and complex test labs by using abstraction. The networking community has appreciated the value of large scale test labs to explore complex network interactions, as seen in projects such as PlanetLab, GENI, DETER, Emulab, and SecSI. Virtualization has enabled the creation of many more such labs. However, one problem remains: it is time consuming, tedious and error prone to set up and configure large scale test networks. Separate devices need to be configured in a coordinated way, even in a virtual lab. AutoNetkit, an open source tool, uses abstractions and defaults to achieve both configuration and deployment and create such large-scale virtual labs.
This allows researchers and operators to explore new protocols, create complex models of networks and predict consequences of configuration changes. However, our abstractions could also allow the discussion of the broader configuration management problem. Abstractions that currently configure networks in a test lab can, in the future, be employed in configuration management tools for real networks. CCS Networks Network services Network monitoring Title iNFAnt: NFA pattern matching on GPGPU devices Abstract This paper presents iNFAnt, a parallel engine for regular expression pattern matching. In contrast with traditional approaches, iNFAnt adopts non-deterministic automata, allowing the compilation of very large and complex rule sets that are otherwise hard to treat. iNFAnt is explicitly designed and developed to run on graphical processing units that provide large amounts of concurrent threads; this parallelism is exploited to handle the non-determinism of the model and to process multiple packets at once, thus achieving high performance levels. Title The 2nd workshop on active internet measurements (AIMS-2) report Abstract On February 8-10, 2010, CAIDA hosted the second Workshop on Active Internet Measurements (AIMS-2) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. The goals of this workshop were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies for addressing future data needs of the network and security research communities. The three-day workshop included presentations, group discussion and analysis, and focused interaction between participating researchers, operators, and policymakers from all over the world. This report describes the motivation and findings of the workshop, and reviews progress on recommendations developed at the 1st Active Internet Measurements Workshop in 2009 [18]. Slides from the workshop presentations are available at [9]. Title NeTraMark: a network traffic classification benchmark Abstract Recent research on Internet traffic classification has produced a number of approaches for distinguishing types of traffic. However, a rigorous comparison of such proposed algorithms still remains a challenge, since every proposal considers a different benchmark for its experimental evaluation. A lack of clear consensus on an objective and scientific way of comparing results has made researchers uncertain of the fundamental as well as relative contributions and limitations of each proposal. In response to the growing necessity for an objective method of comparing traffic classifiers and to shed light on scientifically grounded traffic classification research, we introduce an Internet traffic classification benchmark tool, NeTraMark. Based on six design guidelines (Comparability, Reproducibility, Efficiency, Extensibility, Synergy, and Flexibility/Ease-of-use), NeTraMark is the first Internet traffic classification benchmark where eleven different state-of-the-art traffic classifiers are integrated. NeTraMark allows researchers and practitioners to easily extend it with new classification algorithms and compare them with other built-in classifiers, in terms of three categories of performance metrics: per-whole-trace flow accuracy, per-application flow accuracy, and computational performance.
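The iNFAnt abstract above keeps the non-determinism of the automaton and tracks many active states at once, which is what maps well onto thousands of GPU threads. The fragment below shows the underlying set-of-active-states NFA step in plain Python; the transition-table format and the toy signature rule are assumptions used only to illustrate the model, not the GPGPU engine itself.

def nfa_match(transitions, start_states, accept_states, payload):
    """Simulate an NFA over a byte payload by tracking the set of active states.

    `transitions` is an assumed dict mapping (state, byte) -> set of next states.
    The per-state work inside the inner loop is what a GPU engine parallelises.
    """
    active = set(start_states)
    for byte in payload:
        next_active = set()
        for state in active:
            next_active |= transitions.get((state, byte), set())
        if not next_active:
            return False          # no live state: this payload cannot match
        active = next_active
    return bool(active & set(accept_states))

# Toy rule: accept payloads containing the two-byte signature 0x41 0x42 ("AB") anywhere.
transitions = {}
for b in range(256):
    transitions[(0, b)] = {0}                      # state 0 loops on any byte
    transitions[(2, b)] = {2}                      # state 2 stays accepting
transitions[(0, 0x41)] = {0, 1}                    # on 'A', also advance non-deterministically
transitions[(1, 0x42)] = {2}                       # 'B' right after 'A' reaches the accept state
print(nfa_match(transitions, {0}, {2}, b"xxABxx"))  # True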
Title Explaining packet delays under virtualization Abstract This paper performs controlled experiments with two popular virtualization techniques, Linux-VServer and Xen, to examine the effects of virtualization on packet sending and receiving delays. Using a controlled setting allows us to independently investigate the influence on delay measurements when competing virtual machines (VMs) perform tasks that consume CPU, memory, I/O, hard disk, and network bandwidth. Our results indicate that heavy network usage from competing VMs can introduce delays as high as 100 ms to round-trip times. Furthermore, virtualization adds most of this delay when sending packets, whereas packet reception introduces little extra delay. Based on our findings, we discuss guidelines and propose a feedback mechanism to avoid measurement bias under virtualization. Title The 4th workshop on active internet measurements (AIMS-4) report Abstract On February 8-10, 2012, CAIDA hosted the fourth Workshop on Active Internet Measurements (AIMS-4) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. As with the previous three AIMS workshops, the goals were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies to address future data needs of the network and security operations and research communities. This year we continued to focus on how measurement can illuminate two specific public policy concerns: IPv6 deployment and broadband performance. This report briefly describes topics discussed at this year's workshop. Slides and other materials related to the workshop are available at http://www.caida.org/. Title On-demand time-decaying bloom filters for telemarketer detection Abstract Several traffic monitoring applications may benefit from the availability of efficient mechanisms for approximately tracking smoothed time averages rather than raw counts. This paper provides two contributions in this direction. First, our analysis of Time-decaying Bloom filters, formerly proposed data structures devised to perform approximate Exponentially Weighted Moving Averages on streaming data, reveals two major shortcomings: biased estimation when measurements are read in arbitrary time instants, and slow operation resulting from the need to periodically update all the filter's counters at once. We thus propose a new construction, called On-demand Time-decaying Bloom filter, which relies on a continuous-time operation to overcome the accuracy/performance limitations of the original window-based approach. Second, we show how this new technique can be exploited in the design of high performance stream-based monitoring applications, by developing VoIPSTREAM, a proof-of-concept real-time analysis version of a formerly proposed system for telemarketing call detection. Our validation results, carried out over real telephony data, show how VoIPSTREAM closely mimics the feature extraction process and traffic analysis techniques implemented in the offline system, at a significantly higher processing speed, and without requiring any storage of per-user call detail records.
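The on-demand time-decaying Bloom filter abstract above replaces periodic whole-filter updates with decay applied lazily whenever a counter is touched. A minimal sketch of that continuous-time, on-demand exponential decay is given below for an array of hashed counters; the hash choice, counter layout, and time constant are illustrative assumptions rather than the paper's actual construction.

import math
import time
import hashlib

class DecayingCounters:
    def __init__(self, num_counters=1024, num_hashes=3, tau=60.0):
        self.tau = tau                                  # assumed decay time constant (seconds)
        self.num_hashes = num_hashes
        self.values = [0.0] * num_counters              # smoothed counter values
        self.stamps = [0.0] * num_counters              # last-update time per counter

    def _indices(self, key):
        digest = hashlib.sha1(key.encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], 'big') % len(self.values)
                for i in range(self.num_hashes)]

    def _decayed(self, i, now):
        # Apply exponential decay lazily, only when this counter is read or written.
        return self.values[i] * math.exp(-(now - self.stamps[i]) / self.tau)

    def add(self, key, amount=1.0, now=None):
        now = time.time() if now is None else now
        for i in self._indices(key):
            self.values[i] = self._decayed(i, now) + amount
            self.stamps[i] = now

    def estimate(self, key, now=None):
        now = time.time() if now is None else now
        # Bloom-filter style: the minimum over the hashed counters bounds the true value.
        return min(self._decayed(i, now) for i in self._indices(key))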
Title Extracting benefit from harm: using malware pollution to analyze the impact of political and geophysical events on the internet Abstract Unsolicited one-way Internet traffic, also called Internet background radiation (IBR), has been used for years to study malicious activity on the Internet, including worms, DoS attacks, and scanning address space looking for vulnerabilities to exploit. We show how such traffic can also be used to analyze macroscopic Internet events that are unrelated to malware. We examine two phenomena: country-level censorship of Internet communications described in recent work, and natural disasters (two recent earthquakes). We introduce a new metric of local IBR activity based on the number of unique IP addresses per hour contributing to IBR. The advantage of this metric is that it is not affected by bursts of traffic from a few hosts. Although we have only scratched the surface, we are convinced that IBR traffic is an important building block for comprehensive monitoring, analysis, and possibly even detection of events unrelated to the IBR itself. In particular, IBR offers the opportunity to monitor the impact of events such as natural disasters on network infrastructure, and reveals a view of events that is complementary to many existing measurement platforms based on (BGP) control-plane views or targeted active ICMP probing. Title pcapIndex: an index for network packet traces with legacy compatibility Abstract Long-term historical analysis of captured network traffic is a topic of great interest in network monitoring and network security. A critical requirement is the support for fast discovery of packets that satisfy certain criteria within large-scale packet repositories. This work presents the first indexing scheme for network packet traces based on compressed bitmap indexing principles. Our approach supports very fast insertion rates and results in compact index sizes. The proposed indexing methodology builds upon libpcap, the de-facto reference library for accessing packet-trace repositories. Our solution is therefore backward compatible with any solution that uses the original library. We experience impressive speedups on packet-trace search operations: our experiments suggest that the index-enabled libpcap may reduce the packet retrieval time by more than 1100 times. Title Border gateway protocol (BGP) and traceroute data workshop report Abstract On Monday, 22 August 2011, CAIDA hosted a one-day workshop to discuss scalable measurement and analysis of BGP and traceroute topology data, and practical applications of such data analysis including tracking of macroscopic censorship and filtering activities on the Internet. Discussion topics included: the surprising stability in the number of BGP updates over time; techniques for improving measurement and analysis of inter-domain routing policies; an update on Colorado State's BGPMon instrumentation; using BGP data to improve the interpretation of traceroute data, both for real-time diagnostics (e.g., AS traceroute) and for large-scale topology mapping; using both BGP and traceroute data to support detection and mapping of infrastructure integrity, including different types of filtering and censorship; and use of BGP data to analyze existing and proposed approaches to securing the interdomain routing system. This report briefly summarizes the presentations and discussions that followed.
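The per-hour unique-source metric described in the "Extracting benefit from harm" abstract above can be sketched in a few lines of Python; the record format (timestamp, source IP) and the function name are assumptions for illustration, not the authors' code.

from collections import defaultdict

def unique_sources_per_hour(packets):
    # packets: iterable of (unix_timestamp, source_ip) pairs from a darknet/IBR trace.
    # Returns {hour_bucket: number of distinct source IPs seen in that hour}.
    buckets = defaultdict(set)
    for ts, src in packets:
        buckets[int(ts // 3600)].add(src)
    return {hour: len(ips) for hour, ips in sorted(buckets.items())}

trace = [(0, "192.0.2.1"), (10, "192.0.2.1"), (20, "198.51.100.7"), (3700, "192.0.2.1")]
print(unique_sources_per_hour(trace))  # {0: 2, 1: 1}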
Title AirLab: consistency, fidelity and privacy in wireless measurements Abstract Accurate measurements of deployed wireless networks are vital for researchers to perform realistic evaluation of proposed systems. Unfortunately, the difficulty of performing detailed measurements limits the consistency in parameters and methodology of current datasets. Using different datasets, multiple research studies can arrive at conflicting conclusions about the performance of wireless systems. Correcting this situation requires consistent and comparable wireless traces collected from a variety of deployment environments. In this paper, we describe AirLab, a distributed wireless data collection infrastructure that uses uniformly instrumented measurement nodes at heterogeneous locations to collect consistent traces of both standardized and user-defined experiments. We identify four challenges in the AirLab platform (consistency, fidelity, privacy, and security) and describe our approaches to addressing them. CCS Networks Network types Network on chip CCS Networks Network types Home networks CCS Networks Network types Storage area networks CCS Networks Network types Data center networks CCS Networks Network types Wired access networks Title A demonstration of ultra-low-latency data center optical circuit switching Abstract We designed and constructed a 24x24-port optical circuit switch (OCS) prototype with a programming time of 68.5 μs, a switching time of 2.8 μs, and a receiver electronics initialization time of 8.7 μs [1]. We demonstrate the operation of this prototype switch in a data center testbed under various workloads. Title Using a hybrid honey bees mating optimisation algorithm for solving SONET/SDH design problems Abstract In this paper we propose a hybrid Honey Bees Mating Optimisation (HBMO) algorithm to solve two problems that arise in the design of optical telecommunication networks known as SONET/SDH Ring Assignment Problem (SRAP) and Intraring Synchronous Optical Network Design Problem (IDP). In SRAP the objective is to minimise the number of rings. In IDP the objective is to minimise the number of Add-Drop Multiplexers (ADMs). Both problems are subject to a ring capacity constraint. The HBMO algorithm simulates the mating process of real honey bees. We apply a hybridisation of HBMO to solve these two combinatorial optimisation problems. The feasibility of Hybrid HBMO is demonstrated and compared with the solutions obtained by other algorithms from the literature. Title Dynamic connection provisioning with signal quality guaranteed in all-optical networks Abstract This paper studies the problem of signal-quality-guaranteed connection provisioning in all-optical networks under dynamic traffic. We first improve a model integrated with two main signal quality constraints; then propose an impairment information management framework to adaptively gather and update interference information among lightpaths. Based on that framework, we developed two impairment-aware routing and wavelength assignment (RWA) algorithms. The network simulator OMNET++ is used to simulate the proposed approach on a typical optical network topology. Simulation results show that the proposed impairment-aware RWA algorithms significantly outperform the conventional scheme while imposing a reasonable computational requirement.
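As background for the RWA abstracts above and below, the following Python sketch shows a generic routing and wavelength assignment step under the wavelength-continuity constraint: among the candidate paths, pick one whose links share at least one free wavelength, preferring the path with the largest common wavelength set, then assign the lowest-indexed common wavelength (first fit). This is a textbook-style illustration under assumed inputs, not the impairment-aware algorithms of the paper.

def assign_wavelength(candidate_paths, free_wavelengths):
    # candidate_paths: list of paths, each a list of link ids.
    # free_wavelengths: {link_id: set of wavelengths currently free on that link}.
    # Returns (path, wavelength) meeting the wavelength-continuity constraint, or None if blocked.
    best = None
    for path in candidate_paths:
        common = set.intersection(*(free_wavelengths[link] for link in path))
        if common and (best is None or len(common) > len(best[1])):
            best = (path, common)
    if best is None:
        return None
    path, common = best
    return path, min(common)  # first fit: lowest-indexed wavelength common to all links

free = {"a-b": {0, 1, 2}, "b-c": {1, 2}, "a-d": {0}, "d-c": {3}}
print(assign_wavelength([["a-b", "b-c"], ["a-d", "d-c"]], free))  # (['a-b', 'b-c'], 1)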
Title A wavelength intersection cardinality based routing and wavelength assignment algorithm in optical WDM networks Abstract Optimum route selection and wavelength assignment is a key aspect in Wavelength Division Multiplexed (WDM) optical networks. In general the traffic forwarding decision under adaptive routing algorithms is a function of the number of free channels on the links. This does not ensure common wavelength availability on the entire path. This paper presents a novel approach for traffic forwarding decision which ensures the wavelength continuity constraint between source and the destination node. The path selection algorithm used in this paper establishes the lightpath with less computational overhead and gives better throughput. The proposed algorithm is compared with traditional adaptive routing algorithm through computer simulation. Observations reveal that the new algorithm performs better by reducing the Average Blocking Time of the Network (ABTN). In particular, it performs especially well at higher traffic rates with a limited number of channels per link. Title Proteus: a topology malleable data center network Abstract Full-bandwidth connectivity between all servers of a data center may be necessary for all-to-all traffic patterns, but such interconnects suffer from high cost, complexity, and energy consumption. Recent work has argued that if all-to-all traffic is uncommon, oversubscribed network architectures that can adapt the topology to meet traffic demands are sufficient. In line with this work, we propose Proteus, an all-optical architecture targeting unprecedented topology-flexibility, lower complexity and higher energy efficiency. Title Helios: a hybrid electrical/optical switch architecture for modular data centers Abstract The basic building block of ever larger data centers has shifted from a rack to a modular container with hundreds or even thousands of servers. Delivering scalable bandwidth among such containers is a challenge. A number of recent efforts promise full bisection bandwidth between all servers, though with significant cost, complexity, and power consumption. We present Helios, a hybrid electrical/optical switch architecture that can deliver significant reductions in the number of switching elements, cabling, cost, and power consumption relative to recently proposed data center network architectures. We explore architectural trade-offs and challenges associated with realizing these benefits through the evaluation of a fully functional Helios prototype. Title Reducing NGN energy consumption with IP/SDH/WDM Abstract We analyze network node architectures for packet-based Next Generation Networks (NGNs), and show that the currently held vision of an IP/WDM network may not be the most desirable from an energy consumption point of view. We show that including a circuit-based transport layer (e.g., TDM layer), as an adjunct to the IP layer, has the potential to provide substantial energy savings for NGNs. Title Can they hear me now?: a security analysis of law enforcement wiretaps Abstract Although modern communications services are susceptible to third-party eavesdropping via a wide range of possible techniques, law enforcement agencies in the US and other countries generally use one of two technologies when they conduct legally-authorized interception of telephones and other communications traffic. The most common of these, designed to comply with the 1994 This paper analyzes the security properties of these interfaces.
We demonstrate that the standard CALEA interfaces are vulnerable to a range of unilateral attacks by the intercept target. In particular, because of poor design choices in the interception architecture and protocols, our experiments show it is practical for a CALEA-tapped target to overwhelm the link to law enforcement with spurious signaling messages without degrading her own traffic, effectively preventing call records as well as content from being monitored or recorded. We also identify stop-gap strategies that partially mitigate some of the attacks we identified. Title Preconfigured structures for survivable WDM networks Abstract Network survivability against component failure is an important issue in WDM (Wavelength Division Multiplexing) optical networks. This includes failure detection, localization and isolation (i.e. protection or restoration). In this paper, we focus on link-based failure localization and protection in WDM networks. Different preconfigured optical structures, including simple cycles, non-simple cycles, and trails, are discussed, and their applications in fast link failure localization and protection are studied. We generate solutions using ILPs (Integer Linear Programs) and analyze the pros and cons of each structure. Our analysis shows that simple cycles can be regarded as a special case of non-simple cycles, and both simple and non-simple cycles are special cases of trails. Therefore, trails provide the most general and flexible structure to generate the best solutions. Title Reconfigurable hybrid interconnection for static and dynamic scientific applications Abstract As we enter the era of peta-scale computing, system architects must plan for machines composed of tens or even hundreds of thousands of processors. Although fully connected networks such as fat-tree configurations currently dominate HPC interconnect designs, such approaches are inadequate for ultra-scale concurrencies due to the superlinear growth of component costs. Traditional low-degree interconnect topologies, such as 3D tori, have reemerged as a competitive solution due to the linear scaling of system components relative to the node count; however, such networks are poorly suited for the requirements of many scientific applications at extreme concurrencies. To address these limitations, we propose HFAST, a hybrid switch architecture that uses circuit switches to dynamically reconfigure lower-degree interconnects to suit the topological requirements of a given scientific application. This work presents several new research contributions. We develop an optimization strategy for HFAST mappings and demonstrate that efficiency gains can be attained across a broad range of static numerical computations. Additionally, we conduct an extensive analysis of the communication characteristics of a dynamically adapting mesh calculation and show that the HFAST approach can achieve significant advantages, even when compared with traditional fat-tree configurations. Overall results point to the promising potential of utilizing hybrid reconfigurable networks to interconnect future peta-scale architectures, for both static and dynamically adapting applications.
CCS Networks Network types Cyber-physical networks CCS Networks Network types Mobile networks Title IEEE 802.11ad: introduction and performance evaluation of the first multi-Gbps WiFi technology Abstract Multi-Gbps communication is the next frontier in high-speed local and personal wireless technologies, which will offer the necessary foundation for a new wave of applications such as wireless display, high-speed device synchronization, and the evolution of Wi-Fi. The wide harmonized spectrum in the unlicensed millimeter-wave (60 GHz) band is considered the most prominent candidate to support the evolution towards multi-Gbps data rates. As such, the industry is in the process of defining new 60 GHz PHY and MAC technologies that can serve a wide variety of applications and usages, so as to avoid the proliferation of non-coexistent devices operating in this unoccupied spectrum. The most promising activity is taking place under the auspices of the IEEE 802.11ad task group, which is defining amendments to the 802.11 standard for operation in the 60 GHz band. In this paper we describe the main components of the MAC and PHY amendments included in the current IEEE 802.11ad draft standard that enable multi-Gbps data rates. We also provide a comprehensive set of simulation results for typical use cases, and argue that 802.11ad is poised to be the standard that will enable mass market adoption of multi-Gbps wireless communication in the 60 GHz spectrum band. Title A dual-band architecture for multi-gbps communication in 60 GHz multi-hop networks Abstract By utilizing abundant spectrum available at 60 GHz, millimeter wavelength (mmWave) radios can enable multi-gigabit per second (Gbps) link rates, but only over short distances. The limited range of mmWave radios can be extended to provide high throughput coverage to an entire home or office network using multi-hop communication. In this paper, we present a dual-band architecture that leverages the significant range advantage of low-cost commodity WiFi radios to control and coordinate scheduling/routing on a 60 GHz multi-hop network. The high pathloss and directionality of mmWave radio create significant opportunities for spatial reuse in these mmWave networks. By realizing these spatial reuse gains through effective scheduling/routing, this dual-band architecture can enable multi-Gbps end-to-end throughput in 60 GHz multi-hop networks. Title Efficient codebook-based symbol-wise beamforming for millimeter-wave WPAN system Abstract In this paper, we propose an efficient codebook-based symbol-wise beamforming for millimeter-wave WPAN systems, which is based on multi-level training and antenna selection to reduce the protocol overhead in terms of the number of required training sequences. Using our proposed antenna selection method, one beam direction at a given level includes two beam directions of the following level, so that the required number of training sequences at each level is constant regardless of the number of antennas. Once the transmitter and receiver antennas are properly selected at each level, the training sequences are exchanged in the direction specified by the pre-defined codebook. By adopting the proposed antenna selection method and multi-level training, our proposed scheme can significantly reduce the total number of required training sequences for the beamforming setup. To verify the performance, we compare our proposed scheme with the conventional symbol-wise beamforming schemes based on the codebook.
It is evident from the simulations that our proposed scheme can provide an effective signal-to-noise ratio (SNR) gain approaching that of the conventional scheme with fewer training sequences. Title Fast beam training for mmWave communication system: from algorithm to circuits Abstract A mmWave communication system is equipped with a large number of antennas so as to achieve higher directional gains. Phased array antennas with beam switching are widely used to minimize the cost of the hardware implementation for the 60 GHz system. An array with beam switching can only steer at a fixed number of pre-defined angles called beams. However, since devices do not know the locations of other devices In this paper, we first present a fast beam training algorithm called beam coding. By coding the beams with directions that are steered simultaneously in a training packet, we are able to obtain the best beam pair out of 32*32 pairs for a 16-antenna system in 2-4 training packets only, independent of traffic load in the network. It outperforms the best existing scheme proposed in the IEEE 802.15.3c standard, which requires at least 30 packets to complete the full training in a 16-antenna system and is dependent on traffic load. However, two-bit phase quantization is employed in current phase-shifter designs. It distorts the coded beam pattern in our scheme, leading to imprecise beam training in some cases. Enhancements of the scheme can be performed at the algorithmic level but increase the burden on the protocol design. Our simulation demonstrates that by implementing 3-bit phase quantization in a 16-antenna system, the SNR loss is reduced from 2 dB to 0.5 dB. In view of this, to support the new protocol, we propose a new approach to architect a higher-resolution beamformer. Measurement results show that it can support 7-bit phase quantization level. Title Conflict on a communication channel Abstract Imagine that Alice wants to send a message This problem abstracts many types of conflict in information networks including: jamming attacks in wireless networks and distributed denial-of-service (DDoS) attacks on the Internet, where the costs to Alice, Bob and Carol represent an expenditure of energy or network resources. The problem allows us to quantitatively analyze the economics of information exchange in an adversarial setting and ask: Is communication cheaper than censorship? We answer this question in the affirmative by showing that it is significantly more costly for Carol to block communication of Finally, we apply our work to two problems: (1) DoS attacks in wireless sensor networks and (2) application-level DDoS attacks in a wired client-server scenario. Our applications show how our results can provide an additional tool in mitigating such attacks. Title On the feasibility of spatial multiplexing for indoor 60 GHz communication Abstract We investigate spatial multiplexing in the sparse multipath environment characteristic of beamsteered indoor 60 GHz links. The small carrier wavelength implies that large spatial multiplexing gains are available even under line of sight (LOS) conditions for nodes with form factors compatible with consumer electronics devices such as set-top boxes and television sets. We present a transceiver architecture that provides both highly directive beams and spatial multiplexing, and model its performance for a typical in-room communication link. We evaluate the performance of a simple scheme using a fixed constellation, transmit beamsteering and MMSE reception.
The performance is benchmarked against transmit precoding along the channel eigenmodes without a constellation constraint. We observe that, for a relatively small transmit power per antenna element (achievable, for example, in low-cost CMOS processes), the spatial multiplexing gain is robust to LOS blockage and to variations in the relative locations of the transmitter and receiver in the room. Title OFDMA in the field: current and future challenges Abstract OFDMA will be the predominant technology for the air interface of broadband mobile wireless systems for the coming decades. In recent years, OFDMA-based networks based on IEEE 802.16, and increasingly also on 3GPP LTE, have been rolled out for commercial use. This article gives an overview of the main challenges for the deployment and operation of state-of-the-art OFDMA networks, along with an outlook into future developments for 4G and beyond 4G networks. Title Papyrus: a software platform for distributed dynamic spectrum sharing using SDRs Abstract Proliferation and innovation of wireless technologies require significant amounts of radio spectrum. Recent policy reforms by the FCC are paving the way by freeing up spectrum for a new generation of frequency-agile wireless devices based on software defined radios (SDRs). But despite recent advances in SDR hardware, research on SDR MAC protocols or applications requires an experimental platform for managing physical access. We introduce Papyrus, a software platform for wireless researchers to develop and experiment with dynamic spectrum systems using currently available SDR hardware. Papyrus provides two fundamental building blocks at the physical layer: flexible non-contiguous frequency access and simple and robust frequency detection. Papyrus allows researchers to deploy and experiment with new MAC protocols and applications on USRP GNU Radio, and can also be ported to other SDR platforms. We demonstrate the use of Papyrus using Jello, a distributed MAC overlay for high-bandwidth media streaming applications and Ganache, an SDR layer for adaptable guardband configuration. Full implementations of Papyrus and Jello are publicly available. Title Unbounded contention resolution in multiple-access channels Abstract Recent work on shared-resource contention resolution has yielded fruitful results for local area networks and radio networks, although either the solution is suboptimal [2] or a (possibly loose) upper bound on the number of users needs to be known [5]. In this work, we present the first (two) protocols for contention resolution in radio networks that are asymptotically optimal (with high probability), work without collision detection, and do not require information about the number of contenders. In addition to the theoretical analysis, the protocols are evaluated and contrasted with previous work through extensive simulations. Title Effect of device mobility and phased array antennas on 60 GHz wireless networks Abstract This paper investigates the relationship among the device mobility, types of array antennas, and 60 GHz wireless link quality. For the device mobility, linear and circular motions are modeled. A linear, a rectangular, and a square array antenna are designed and used for the simulations to quantify the 60 GHz link quality together with the device mobility. The simulation results show that the linear array antenna may need more frequent re-beamforming due to the device mobility than the rectangular or the square array antenna.
The simulation results also show that the circular motion of the device may break the link more frequently than the linear motion, and thus this needs to be taken into account in the protocol and 60 GHz system designs. CCS Networks Network types Overlay and other logical network structures CCS Networks Network types Wireless access networks CCS Networks Network types Ad hoc networks CCS Networks Network types Public Internet Title "Network Neutrality": the meme, its cost, its future Abstract In June 2011 I participated in a panel on network neutrality hosted at the June cybersecurity meeting of the DHS/SRI Infosec Technology Transition Council (ITTC), where "experts and leaders from the government, private, financial, IT, venture capitalist, and academia and science sectors came together to address the problem of identity theft and related criminal activity on the Internet." I recently wrote up some of my thoughts on that panel, including what network neutrality has to do with cybersecurity. Title Internet and the Erlang formula Abstract We demonstrate that the Internet has a formula linking demand, capacity and performance that in many ways is the analogue of the Erlang loss formula of telephony. Surprisingly, this formula is none other than the Erlang delay formula. It provides an upper bound on the probability that a flow of a given peak rate suffers degradation when bandwidth sharing is max-min fair. Apart from the flow rate, the only relevant parameters are link capacity and overall demand. We explain why this result is valid under a very general and realistic traffic model and discuss its significance for network engineering. Title Rehoming edge links for better traffic engineering Abstract Traditional traffic engineering adapts the routing of traffic within the network to maximize performance. We propose a new approach that also adaptively changes where traffic enters and leaves the network---changing the "traffic matrix", and not just the intradomain routing configuration. Our approach does not affect traffic patterns and BGP routes seen in neighboring networks, unlike conventional inter-domain traffic engineering where changes in BGP policies shift traffic and routes from one edge link to another. Instead, we capitalize on recent innovations in edge-link migration that enable seamless rehoming of an edge link to a different internal router in an ISP backbone network---completely transparent to the router in the neighboring domain. We present an optimization framework for traffic engineering with migration and develop algorithms that determine which edge links should migrate, where they should go, and how often they should move. Our experiments with Internet2 traffic and topology data show that edge-link migration allows the network to carry 18.8% more traffic (at the same level of performance) than optimizing routing alone. Title Workshop on internet economics (WIE2011) report Abstract The second Workshop on Internet Economics [2], hosted by CAIDA and Georgia Institute of Technology on December 1-2, 2011, brought together network technology and policy researchers with providers of commercial Internet facilities and services (network operators) to further explore the common objective of framing an agenda for the emerging but empirically stunted field of Internet infrastructure economics. This report describes the workshop discussions and presents relevant open research questions identified by its participants.
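For readers unfamiliar with the Erlang delay formula referenced in the "Internet and the Erlang formula" abstract above, the Python sketch below evaluates the classical Erlang C probability of waiting for a given number of servers and an offered load A (in Erlangs). How link capacity, demand, and flow peak rate map onto these parameters is the contribution of that paper and is not reproduced here; the function name and example values are assumptions.

from math import factorial

def erlang_c(servers, offered_load):
    # Classical Erlang delay (Erlang C) formula: probability an arrival must wait.
    if offered_load >= servers:
        return 1.0  # overloaded regime: waiting is certain in the classical model
    top = (offered_load ** servers / factorial(servers)) * (servers / (servers - offered_load))
    bottom = sum(offered_load ** k / factorial(k) for k in range(servers)) + top
    return top / bottom

print(round(erlang_c(servers=10, offered_load=7.0), 4))  # probability of waiting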
Title Exploring mobile/WiFi handover with multipath TCP Abstract Mobile operators see an unending growth of data traffic generated by their customers on their mobile data networks. As the operators start to have a hard time carrying all this traffic over 3G or 4G networks, offloading to WiFi is being considered. Multipath TCP (MPTCP) is an evolution of TCP that allows the simultaneous use of multiple interfaces for a single connection while still presenting a standard TCP socket API to the application. The protocol specification of Multipath TCP provides the building blocks to allow transparent handover between WiFi and 3G in both directions. In this paper we experimentally prove the feasibility of using MPTCP for mobile/WiFi handover in the current Internet. Our experiments run over real WiFi/3G networks and use our Linux kernel implementation of MPTCP that we enhanced to better support handover. We analyze MPTCP's energy consumption and handover performance in various operational modes. We find that MPTCP enables smooth handovers offering reasonable performance even for very demanding applications such as VoIP. Finally, our experiments showed that lost MPTCP control signals can adversely affect handover performance; we implement and test a simple but effective solution to this issue. Title Choice as a principle in network architecture Abstract There has been great interest in defining a new network architecture that can meet the needs of a future Internet. One of the main challenges in this context is how to realize the many different technical solutions that have been developed in recent years in a single coherent architecture. In addition, it is necessary to consider how to ensure economic viability of architecture solutions. In this work, we discuss how to design a network architecture where choices at different layers of the protocol stack are explicitly exposed to users. This approach ensures that innovative technical solutions can be used and rewarded, which is essential to encourage wide deployment of this architecture. Title Multi-resource fair queueing for packet processing Abstract Middleboxes are ubiquitous in today's networks and perform a variety of important functions, including IDS, VPN, firewalling, and WAN optimization. These functions differ vastly in their requirements for hardware resources ( Title OpenFlow: a radical new idea in networking Abstract An open standard that enables software-defined networking. OpenFlow is a concept from emerging software-defined networking (SDN) technologies, which is intended to help users operate networks in a smarter way, with greater flexibility and efficiency. Compared to the traditional approach, SDN is itself a revolutionary approach, separating the network control plane and the forwarding plane. OpenFlow is shown to consume less energy compared to traditional hardware-based networking, an appealing characteristic for next-generation services and applications. Because it is programmable via an open protocol, OpenFlow is very flexible and varied in its implementations. It can perform network address translation (NAT) tasks such as rewriting packets, dropping packets as a firewall might, and keeping the network healthy by load balancing the packet flows. The main contribution of this article is to introduce, in a practical way, the emerging OpenFlow concept as a key enabler for a wide range of applications and services for next-generation networks.
Besides these main ideas, the author also proposes practical examples of applications and services using OpenFlow, such as in bandwidth management, tenantized networking, and game servers. The article also contributes to debates on whether the OpenFlow paradigm is an evolution or devolution in networking technologies, with an analogy to the introduction of app stores in smartphone history. Furthermore, the author warns against oversimplification, which may complicate OpenFlow implementation due to networking complexity issues such as redundancy and failover mechanisms. The author has successfully introduced a complicated new concept in networking in plain yet precise words. For those interested in networking, this article is definitely worth reading and contemplating. Title Evaluation of a multi-hop airborne ip backbone with heterogeneous radio technologies Abstract In recent years, there has been increasing interest within the DoD in building an on-demand airborne network for communications relay utilizing high capacity, long-range military radio systems. While these systems operate well in a network of homogeneous systems, platforms generally employ multiple heterogeneous radio systems, making internetworking difficult due to varying radio characteristics and lack of interoperability. Although simulations and emulation tests can provide a baseline for how systems will perform in a controlled environment, field-tests are crucial to demonstrate capabilities in real-world operating environments. In this paper, we present measurement results from a field test involving two airborne platforms forming a dynamically routed aerial IP backbone over 200 nautical miles (NM) with various radio systems as part of the C4ISR 2010 exercise. We present measurement results on per-link performance, radio-to-router interface performance, and multi-hop network performance with prototype software on open-source platforms. Title Stable and efficient pricing for inter-domain traffic forwarding Abstract We address the question of strategic pricing of inter-domain traffic forwarding services provided by ISPs, which is also closely coupled with the question of how ISPs route their traffic towards their neighboring ISPs. Posing this question as a non-cooperative game between neighboring ISPs, we study the properties of this pricing game in terms of the existence and efficiency of the equilibrium. We observe that for "well-provisioned" ISPs, Nash equilibrium prices exist and they result in flows that maximize the overall network utility (generalized end-to-end throughput). For general ISP topologies, equilibrium prices may not exist; however, simulations on a large number of realistic topologies show that simple best-response-based price updates converge to stable and efficient prices and flows for most topologies. CCS Networks Network types Packet-switching networks Title Adaptive forwarding in named data networking Abstract In the Named Data Networking (NDN) architecture, packets carry data names rather than source or destination addresses. This change of paradigm leads to a new data plane: data consumers send out Interest packets, routers forward them and maintain the state of pending Interests, which is used to guide Data packets back to the consumers. NDN routers' forwarding process is able to detect network problems by observing the two-way traffic of Interest and Data packets, and explore multiple alternative paths without loops.
This is in sharp contrast to today's IP forwarding process, which follows a single path chosen by the routing process, with no adaptability of its own. In this paper we outline the design of NDN's adaptive forwarding, articulate its potential benefits, and identify open research issues. Title Exploit the known or explore the unknown?: hamlet-like doubts in ICN Abstract Most Information Centric Networking designs propose the use of widely distributed in-network storage. However, the huge amount of content exchanged in the Internet, and the volatility of content replicas cached across the network pose significant challenges to the definition of a scalable routing protocol able to address all available copies. In addition, the number of available copies of a given content item and their distribution among caches is clearly impacted by the request forwarding policy. In this paper we gather initial design considerations for an ICN request forwarding strategy by spanning over two extremes: a deterministic Title Efficiently migrating stateful middleboxes Abstract Title Using CPU as a traffic co-processing unit in commodity switches Abstract Commodity switches are becoming increasingly important as they are the basic building blocks for the enterprise and data center networks. With the availability of all-in-one switching ASICs, these switches almost universally adopt a single switching ASIC design. However, such a design also brings two major limitations, Title Hey, you darned counters!: get off my ASIC! Abstract Software-Defined Networking (SDN) gains much of its value through the use of central controllers with global views of dynamic network state. To support a global view, SDN protocols, such as OpenFlow, expose several counters for each flow-table rule. These counters must be maintained by the data plane, which is typically implemented in hardware as an ASIC. ASIC-based counters are inflexible, and cannot easily be modified to compute novel metrics. These counters do not need to be on the ASIC. If the ASIC data plane has a fast connection to a general-purpose CPU with cost-effective memory, we can replace traditional counters with a stream of rule-match records, transmit this stream to the CPU, and then process the stream in the CPU. These Title Optimal queue-size scaling in switched networks Abstract We consider a switched (queueing) network in which there are constraints on which queues may be served simultaneously; such networks have been used to effectively model input-queued switches and wireless networks. The scheduling policy for such a network specifies which queues to serve at any point in time, based on the current state or past history of the system. In the main result of this paper, we provide a new class of online scheduling policies that achieve optimal average queue-size scaling for a class of switched networks including input-queued switches. In particular, it establishes the validity of a conjecture about optimal queue-size scaling for input-queued switches. Title Channel width assignment using relative backlog: extending back-pressure to physical layer Abstract With recent advances in Software-defined Radios (SDRs), it has indeed become feasible to dynamically adapt the channel widths at smaller time scales. Even though the advantages of varying channel width (e.g.
higher link throughput with higher width) have been explored before, as with most of the physical layer settings (rate, transmission power, etc.), naively configuring the channel widths of links can in fact have a negative impact on wireless network performance. In this paper, we design a cross-layer channel width assignment scheme that adapts the width according to the backlog of link-layer queues. We leverage the benefits of varying channel widths while adhering to the invariants of the back-pressure utility maximization framework. The presented scheme not only guarantees improved throughput and network utilization but also ensures bounded buffer occupancy and fairness. Title Performance evaluation of the random replacement policy for networks of caches Abstract Caching is a key component for Content Distribution Networks and new Information-Centric Network architectures. In this paper, we address performance issues of caching networks running the RND replacement policy. We first prove that when the popularity distribution follows a general power-law with decay exponent α > 1, the miss probability is asymptotic to O( C Title A reconfigurable optical/electrical interconnect architecture for large-scale clusters and datacenters Abstract Hybrid optical/electrical interconnects, using commercially available optical circuit switches at the core part of the network, have been recently proposed as an attractive alternative to fully-connected electronically-switched networks in terms of port density, bandwidth/port, cabling and energy efficiency. Although the shift from a traditionally packet-switched core to switching between server aggregations (or servers) at circuit granularity requires system redesign, the approach has been shown to fit well to the traffic requirements of certain classes of high-performance computing applications, as well as to the traffic patterns exhibited by typical data center workloads. Recent proposals for such system designs have looked at small/medium scale hybrid interconnects. In this paper, we present a hybrid optical/electrical interconnect architecture intended for large-scale deployments of high-performance computing systems and server co-locations. To reduce complexity, our architecture employs a regular shuffle network topology that allows for simple management and cabling. Thanks to using a single-stage core interconnect and multiple optical planes, our design can be both incrementally scaled up (in capacity) and scaled out (in the number of racks) without requiring major re-cabling and network re-configuration. Also, we are, to our knowledge, the first to explore the benefit of using multi-hopping in the optical domain as a means to avoid constant reconfiguration of optical circuit switches. We have prototyped our architecture at packet-level detail in a simulation framework to evaluate this concept. Our results demonstrate that our hybrid interconnect, by adapting to the changing nature of application traffic, can significantly exceed the throughput of a static interconnect of equal degree, while at times attaining a throughput comparable to that of a costly fully-connected network. We also show a further benefit of multi-hopping: it reduces performance drops by reducing the frequency of reconfiguration. Title CONNECT: re-examining conventional wisdom for designing nocs in the context of FPGAs Abstract An FPGA is a peculiar hardware realization substrate in terms of the relative speed and cost of logic vs. wires vs. memory.
In this paper, we present a Network-on-Chip (NoC) design study from the mindset of NoC as a synthesizable infrastructural element to support emerging System-on-Chip (SoC) applications on FPGAs. To support our study, we developed CONNECT, an NoC generator that can produce synthesizable RTL designs of FPGA-tuned multi-node NoCs of arbitrary topology. The CONNECT NoC architecture embodies a set of FPGA-motivated design principles that uniquely influence key NoC design decisions, such as topology, link width, router pipeline depth, network buffer sizing, and flow control. We evaluate CONNECT against a high-quality publicly available synthesizable RTL-level NoC design intended for ASICs. Our evaluation shows a significant gain in specializing NoC design decisions to FPGAs' unique mapping and operating characteristics. For example, in the case of a 4x4 mesh configuration evaluated using a set of synthetic traffic patterns, we obtain comparable or better performance than the state-of-the-art NoC while reducing logic resource cost by 58%, or alternatively, achieve 3-4x better performance for approximately the same logic resource usage. Finally, to demonstrate CONNECT's flexibility and extensive design space coverage, we also report synthesis and network performance results for several router configurations and for entire CONNECT networks. CCS Software and its engineering Software organization and properties Contextual software domains CCS Software and its engineering Software organization and properties Software system structures CCS Software and its engineering Software organization and properties Software functional properties CCS Software and its engineering Software organization and properties Extra-functional properties CCS Software and its engineering Software notations and tools General programming languages CCS Software and its engineering Software notations and tools Formal language definitions CCS Software and its engineering Software notations and tools Compilers CCS Software and its engineering Software notations and tools Context specific languages CCS Software and its engineering Software notations and tools System description languages CCS Software and its engineering Software notations and tools Development frameworks and environments CCS Software and its engineering Software notations and tools Software configuration management and version control systems Title From feature models to decision models and back again an analysis based on formal transformations Abstract In Software Product Line Engineering, variability modeling plays a crucial role. Over the years, a number of different modeling paradigms with a plethora of different approaches have been proposed. However, little attention has been paid to comparing these concepts. In this paper, we compare the capabilities and expressiveness of basic feature modeling with basic decision modeling. We also present a formalization of basic decision modeling and show that, in combination with a powerful constraint language, both approaches are equivalent, while in their very basic forms they are not equivalent. These results can be used to transfer existing research results between the two paradigms. Title Enablers and inhibitors for speed with reuse Abstract An open issue in industry is software reuse in the context of large-scale Agile product development. The speed offered by agile practices is needed to hit the market, while reuse is needed for long-term productivity, efficiency, and profit.
The paper presents an empirical investigation of factors influencing speed and reuse in three large product-developing organizations seeking to implement Agile practices. The paper identifies, through a multiple case study with 3 organizations, 114 business-, process-, organizational-, architecture-, knowledge- and communication factors with positive or negative influences on reuse, speed or both. Contributions are a categorized inventory of influencing factors, a display for organizing factors for the purpose of process improvement work, and a list of key improvement areas to address when implementing reuse in organizations striving to become more Agile. Categories identified include good factors with positive influences on reuse or speed, harmful factors with negative influences, and complex factors involving inverse or ambiguous relationships. Key improvement areas in the studied organizations are intra-organizational communication practices, reuse awareness and practices, architectural integration and variability management. Results are intended to support process improvement work in the direction of Agile product development. Feedback on results from the studied organizations has been that the inventory captures current situations, and is useful for software process improvement work. Title Introduction to software product lines Abstract This tutorial introduces the essential activities and underlying practice areas of software product line development. It reviews the basic concepts of software product lines, discusses the costs and benefits of product line adoption, introduces the SEI's Framework for Software Product Line Practice, and describes approaches to applying the practices of the framework. Title A case study on variability in user interfaces Abstract Software Product Lines (SPL) enable efficient derivation of products. SPL concepts have been applied successfully in many domains including interactive applications. However, the user interface (UI) part of applications has barely been addressed yet. While standard SPL concepts allow derivation of Title Author order: what science can learn from the arts Abstract Some thoughts about author order in research papers. Title How variation changes when an embedded product ceases to be embedded? Abstract This talk focuses on the change in the smartphone industry. The role of applications and services has increased so much that smartphone product families no longer behave like embedded product families. Product variation now happens mostly after purchase and successful product families are much smaller than before Title A dynamic reputation system with built-in attack resilience to safeguard buyers in e-market Abstract Reputation systems aim to reduce the risk of loss due to untrustworthy participants by providing a mechanism for establishing trustworthiness between mutually unknown online entities in an information asymmetric e-market. These systems encourage honest behavior and discourage malicious behavior of buyer and seller agents by laying a foundation for security and stability in the e-market. However, the success of a reputation system depends on its built-in resilience capabilities to foil various attacks. This paper focuses on how to safeguard buyers from dishonest sellers and advisors by incorporating an attack-resilient reputation computation methodology.
The objectives of the proposed dynamic reputation system in the distributed environment are to reduce the incentive for behaving dishonestly, and to minimize harm in case of attacks by dishonest participants with the inherent purpose of improving the quality of services in the e-market. Title Multiobjective optimization for project portfolio selection Abstract This paper proposes a multiobjective heuristic search approach to support a project portfolio selection technique in scenarios with a large number of candidate projects. The original formulation for the technique requires analyzing all combinations of candidate projects, which is infeasible when more than a few alternatives are available. We have used a multiobjective genetic algorithm to partially explore the search space of project combinations and select the most effective ones. We present an experimental study based on four project selection problems that compares the results found by the genetic algorithm to those yielded by a non-systematic search procedure. Results show evidence that the project selection technique can be used in large-scale scenarios and that the GA presents better results than a simpler search strategy. Title Google's hybrid approach to research Abstract By closely connecting research and development, Google is able to conduct experiments on an unprecedented scale, often resulting in new capabilities for the company. Title Position paper: approach for architectural design and modelling with documented design decisions (ADMD3) Abstract Documented design decisions simplify the evolution of software systems. However, design decisions are currently often either badly documented or not documented at all. Relations between requirements, decisions, and architectural elements are missing, and architecture alternatives are not preserved. As a consequence it is hard to identify deprecated design solutions when requirements change. In this position paper, we present an approach to document software architecture design decisions, together with related requirements and related architectural elements, through the goal-driven elicitation of those requirements needed to make a design decision. Therefore, we propose a process model and supporting meta-models, including a meta-model for a pattern catalogue. The speciality of this pattern catalogue is the inclusion of questions to drive requirements engineering to validate pattern selections, and to guide choosing the most appropriate pattern variant. The paper concludes with a discussion on the assumptions of the approach and possible approaches to empirical validation. CCS Software and its engineering Software notations and tools Software libraries and repositories Title Gardening Tips Abstract Title Torchvision: the machine-vision package of Torch Abstract This paper presents Torchvision, an open source machine vision package for Torch. Torch is a machine learning library providing a series of state-of-the-art algorithms such as Neural Networks, Support Vector Machines, Gaussian Mixture Models, Hidden Markov Models and many others. Torchvision provides additional functionalities to manipulate and process images with standard image processing algorithms. Hence, the resulting images can be used directly with the Torch machine learning algorithms as Torchvision is fully integrated with Torch. Both Torch and Torchvision are written in C++ and are publicly available under the Free-BSD License.
Title Data visualization of teen birth rate data using freely available rapid prototyping tools Abstract Making sense of large data sets can be challenging without visual aids. The purpose of this project was to use freely available, web-based tools to rapidly visualize and enable the exploration of relationships within a complex data set. The data set leveraged was teen birth statistics in Texas from 2001 -- 2004. This information is used by researchers and public health administrators for public health decision making. Current tools used to explore this data set are table-driven and difficult to use. Using data presented in Exhibit, an open-source publishing framework, end users were able to successfully explore a complex data set. Given the users' enthusiastic response to the displays, we conclude that this tool is appropriate and useful for this purpose. The relatively low cost and effort to set up and maintain this display makes it ideal for organizations with low budgets and limited resources, but with a need to analyze complex data sets. Title Remote healthcare delivery with sqwelch Abstract In this paper, we describe the architecture of Sqwelch, and show how it can be applied in the delivery of healthcare to patients in the community and to support their caregiver networks. We show, through a detailed scenario, how personalized web applications can be composed by non-technical users on our website: www.sqwelch.com (on which we have a demonstration video). Sqwelch uses lightweight semantics to effect mediation between heterogeneous components. Title Recommending API methods based on identifier contexts Abstract Reuse recommendation systems suggest functions or code snippets that are useful for the programming task at hand within the IDE. These systems utilize different aspects from the context of the cursor position within the source file being edited to infer which functionality is needed next. Current approaches are based on structural information like inheritance relations or type/method usages. We propose a novel method that utilizes the knowledge embodied in the identifiers as a basis for the recommendation of API methods. This approach has the advantage that relevant recommendations can also be made in cases where no methods are called in the context or if contexts use distinct but semantically similar types or methods. First experiments show that the correct method is recommended in about one quarter to one third of the cases. Title Finding web services via BPEL fragment search Abstract The development of service-oriented systems (SOS) is based on searching for services that are to be used. Much work has been done on finding individual services, and recently, work has also been done on searching for services by first searching for similar SOS, i.e., those having similar processes. But such work has focused on finding the entire process of an SOS. The developer may only want part of a process, but current work does not explicitly support this. This paper takes an approach of finding services by first finding process fragments. We take BPEL as an example of a behavioral process model that describes an SOS. We describe our approach to searching for BPEL fragments.
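A toy version of the identifier-context idea from the "Recommending API methods based on identifier contexts" abstract above can be sketched as follows: split camelCase and snake_case identifiers near the cursor into word tokens and rank candidate API methods by token overlap. The tokenization, scoring, and example names are all assumptions for illustration; the paper's actual model is not reproduced here.

import re

def tokens(identifier):
    # Split camelCase / snake_case identifiers into lower-case word tokens.
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", identifier).replace("_", " ")
    return {part.lower() for part in spaced.split()}

def recommend(context_identifiers, api_methods, top_k=3):
    # Rank API method names by Jaccard overlap with the identifiers around the cursor.
    ctx = set().union(*(tokens(ident) for ident in context_identifiers))
    def score(method):
        t = tokens(method)
        return len(t & ctx) / len(t | ctx)
    return sorted(api_methods, key=score, reverse=True)[:top_k]

print(recommend(["fileReader", "bufferSize", "readLine"],
                ["readAllLines", "writeString", "openFileChannel"]))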
Title Expressing multi-way data-flow constraint systems as a commutative monoid makes many of their properties obvious Abstract Here multi-way data-flow constraint systems are viewed as Title Generic conversions of abstract syntax representations Abstract In this paper we present a datatype-generic approach to syntax with variable binding. A universe specifies the binding and scoping structure of object languages, including binders that bind multiple variables as well as sequential and recursive scoping. Two interpretations of the universe are given: one based on parametric higher-order abstract syntax and one on well-typed de Bruijn indices. The former provides convenient interfaces to embedded domain-specific languages, but is awkward to analyse and manipulate directly, while the latter is a convenient representation in implementations, but is unusable as a surface language. We show how to generically convert from the parametric HOAS interpretation to the de Bruijn interpretation, thereby sparing DSL developers the pain of writing the conversion themselves. Title A generic abstract syntax model for embedded languages Abstract Representing a syntax tree using a data type often involves having many similar-looking constructors. Functions operating on such types often end up having many similar-looking cases. Different languages often make use of similar-looking constructions. We propose a generic model of abstract syntax trees capable of representing a wide range of typed languages. Syntactic constructs can be composed in a modular fashion enabling reuse of abstract syntax and syntactic processing within and across languages. Building on previous methods of encoding extensible data types in Haskell, our model is a pragmatic solution to Wadler's "expression problem". Its practicality has been confirmed by its use in the implementation of the embedded language Feldspar. Title Management and operation of a software product line Abstract The tutorial will be driven by a set of scenarios that will set the context for several threads of discussion. These scenarios will reflect different domains and organizational types. Each of the scenarios will illustrate an aspect of one or more of the topics 1--6 listed above. For example, John McGregor has had experience with product lines in which there are many suppliers of core assets and multiple independent builders of products. A scenario like this would provide the context for topic number 4, communication. 1. Managing the scope and variation over time 2. The core asset base as a platform 3. Refreshing the technical assets 4. Communication among customers, product developers, and core asset developers 5. Iterative, incremental business cases for experiments and innovation 6.
Producing products efficiently and effectively CCS Software and its engineering Software notations and tools Software maintenance tools CCS Software and its engineering Software creation and management Designing software CCS Software and its engineering Software creation and management Software development process management CCS Software and its engineering Software creation and management Software development techniques CCS Software and its engineering Software creation and management Software verification and validation CCS Software and its engineering Software creation and management Software post-development issues CCS Software and its engineering Software creation and management Collaboration in software development CCS Software and its engineering Software creation and management Search-based software engineering CCS Theory of computation Models of computation Computability CCS Theory of computation Models of computation Probabilistic computation Title Extractors and Lower Bounds for Locally Samplable Sources Abstract We consider the problem of extracting randomness from sources that are efficiently samplable, in the sense that each output bit of the sampler only depends on some small number Using our result, we also improve a result of Viola [2010] who proved a 1/2 − Title Fast computation of small cuts via cycle space sampling Abstract We describe a new sampling-based method to determine cuts in an undirected graph. For a graph ( In the model of distributed computing in a graph In the model of parallel computing on the EREW PRAM, our approach yields a simple algorithm with optimal time complexity Title Kolmogorov Complexity in Randomness Extraction Abstract We clarify the role of Kolmogorov complexity in the area of randomness extraction. We show that a computable function is an almost randomness extractor if and only if it is a Kolmogorov complexity extractor, thus establishing a fundamental equivalence between two forms of extraction studied in the literature: Kolmogorov extraction and randomness extraction. We present a distribution Title Pseudorandom generators for group products: extended abstract Abstract We prove that the pseudorandom generator introduced by Impagliazzo et al. (1994) with proper choice of parameters fools group products of a given finite group G. The seed length is O((|G| Title Optimal exploration of small rings Abstract In [4], the authors look at Here we close the question of optimal ( Title Fast algorithms for finding matchings in lopsided bipartite graphs with applications to display ads Abstract We derive efficient algorithms for both detecting and representing matchings in lopsided bipartite graphs; such graphs have so many nodes on one side that it is infeasible to represent them in memory or to identify matchings using standard approaches. Detecting and representing matchings in lopsided bipartite graphs is important for allocating and delivering guaranteed-placement display ads, where the corresponding bipartite graph of interest has nodes representing advertisers on one side and nodes representing web-page impressions on the other; real-world instances of such graphs can have billions of impression nodes. We provide theoretical guarantees for our algorithms, and in a real-world advertising application, we demonstrate the feasibility of our detection algorithms. 
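The lopsided-matching abstract above concerns deciding whether advertiser demand can be matched to impressions when the impression side is far too large to materialize. The sketch below is only the textbook augmenting-path matching on a toy advertiser/impression graph, shown to make the underlying matching question concrete; it is not the authors' sampling-based detection or representation algorithm, and the names used are illustrative.

```python
# Hedged sketch: classic augmenting-path bipartite matching on a small
# advertiser/impression graph. The paper's contribution is handling graphs
# whose impression side cannot even be stored; this toy only illustrates the
# matching question itself.
def max_bipartite_matching(adj: dict[str, list[str]]) -> dict[str, str]:
    """adj maps each advertiser to the impressions it can be shown on.
    Returns a maximum matching as {impression: advertiser}."""
    match: dict[str, str] = {}

    def try_assign(advertiser: str, seen: set[str]) -> bool:
        for imp in adj[advertiser]:
            if imp in seen:
                continue
            seen.add(imp)
            # Free impression, or its current advertiser can be reassigned.
            if imp not in match or try_assign(match[imp], seen):
                match[imp] = advertiser
                return True
        return False

    for advertiser in adj:
        try_assign(advertiser, set())
    return match

if __name__ == "__main__":
    demand = {"adv1": ["imp1", "imp2"], "adv2": ["imp2"], "adv3": ["imp1"]}
    print(max_bipartite_matching(demand))   # a maximum matching of size 2
```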
Title EDA-RL: estimation of distribution algorithms for reinforcement learning problems Abstract By making use of probabilistic models, Estimation of Distribution Algorithms (EDAs) can outperform conventional evolutionary computations. In this paper, EDAs are extended to solve reinforcement learning problems which arise naturally in a framework for autonomous agents. In reinforcement learning problems, we have to find better agent policies such that the agents' future rewards are increased. In general, such a policy can be represented by conditional probabilities of the agents' actions, given the perceptual inputs. In order to estimate such a conditional probability distribution, Conditional Random Fields (CRFs) by Lafferty et al. are newly introduced into EDAs in this paper. The reason for adopting CRFs is that CRFs are able to learn conditional probabilistic distributions from a large amount of input-output data, i.e., episodes in the case of reinforcement learning problems. On the other hand, conventional reinforcement learning algorithms can only learn incrementally. Computer simulations of Probabilistic Transition Problems and Perceptual Aliasing Maze Problems show the effectiveness of EDA-RL. Title A Simple Proof of Bazzi’s Theorem Abstract Linial and Nisan [1990] asked if any polylog-wise independent distribution fools any function in AC Title A logical characterization of the counting hierarchy Abstract In this article we give a logical characterization of the counting hierarchy. The counting hierarchy is the analogue of the polynomial hierarchy, the building block being Probabilistic polynomial time PP instead of NP. We show that the extension of first-order logic by second-order majority quantifiers of all arities describes exactly the problems in the counting hierarchy. We also consider extending the characterization to general proportional quantifiers Title Eliciting an overlooked aspect of Bayesian reasoning Abstract Bayes' theorem is the theoretical basis of uncertainty management as well as the stochastic foundation for forecast-oriented expert systems. Mathematically, the reasoning steps can be represented by a sequence of probabilistic computations. To reduce the mathematical complexity and make it mentally manageable, an assumption, known as the Bayesian Assumption, is usually made. This assumption does simplify the computation, but it also introduces errors and distorts the result away from the true probabilistic value. In this paper, I use Venn diagrams to discuss the distortion being introduced to the result by showing cases from CCS Theory of computation Models of computation Quantum computation theory CCS Theory of computation Models of computation Interactive computation Title NASA: achieving lower regrets and faster rates via adaptive stepsizes Abstract The classic Stochastic Approximation (SA) method achieves optimal rates under the black-box model. This optimality does not rule out better algorithms when more information about functions and data is available. We present a family of Title A Primal-Dual Randomized Algorithm for Weighted Paging Abstract We study the weighted version of the classic online paging problem where there is a weight (cost) for fetching each page into the cache. We design a randomized Our solution is based on a two-step approach.
We first obtain an Title Property networks allowing oracle-based mode-change propagation in hierarchical components Abstract Strong pressure to deploy embedded control systems on low-cost hardware leads to the need to optimize software architectures to minimize resource demands. Nevertheless, releasing the resources not needed in specific phases of system execution is only rarely supported by today's component frameworks, mainly since information about the system state is spread over several components, which makes the idea hard to implement. The paper introduces a formal model of Title Online matching with concave returns Abstract We consider a Our algorithm is based on the primal-dual paradigm and makes use of convex programming duality. The upper bounds are obtained by formulating the task of finding the right counterexample as an optimization problem. This path takes us through the calculus of variations which deals with optimizing over continuous functions. The algorithm and the upper bound are related to each other via a set of differential equations, which points to a certain kind of duality between them. Title Quantum interactive proofs with weak error bounds Abstract This paper proves that the computational power of quantum interactive proof systems, with a double-exponentially small gap in acceptance probability between the completeness and soundness cases, is precisely characterized by EXP, the class of problems solvable in exponential time by deterministic Turing machines. This fact, and our proof of it, has implications concerning quantum and classical interactive proof systems in the setting of unbounded error that include the following: • Quantum interactive proof systems are strictly more powerful than their classical counterparts in the unbounded-error setting unless PSPACE = EXP, as even unbounded error classical interactive proof systems can be simulated in PSPACE. • The recent proof of Jain, Ji, Upadhyay, and Watrous (STOC 2010) establishing QIP = PSPACE relies heavily on the fact that the quantum interactive proof systems defining the class QIP have bounded error. Our result implies that some nontrivial assumption on the error bounds for quantum interactive proofs is unavoidable to establish this result (unless PSPACE = EXP). • To prove our result, we give a quantum interactive proof system for EXP with perfect completeness and soundness error 1--2 We also study the computational power of a few other related unbounded-error complexity classes. Title Online Optimization with Uncertain Information Abstract We introduce a new framework for designing online algorithms that can incorporate additional information about the input sequence, while maintaining a reasonable competitive ratio if the additional information is incorrect. Within this framework, we present online algorithms for several problems including allocation of online advertisement space, load balancing, and facility location. Title ACM international workshop on interactive multimedia on mobile and portable devices (IMMPD'11) Abstract As mobile and portable devices become ubiquitous in people's daily lives, how to design user interfaces for these products that enable natural, intuitive and fun interaction is one of the main challenges the multimedia community is facing.
Following several successful events, the ACM International workshop on Interactive Multimedia on Mobile and Portable Devices (IMMPD'11) aims to bring together researchers from both academia and industry in domains including computer vision, audio and speech processing, machine learning, pattern recognition, communications, human-computer interaction, and media technology to share and discuss recent advances in interactive multimedia. Title Near-optimal private approximation protocols via a black box transformation Abstract We show the following transformation: any two-party protocol for outputting a (1+ε)-approximation to f(x,y) = ∑ Title Constant-round non-malleable commitments from any one-way function Abstract We show unconditionally that the existence of commitment schemes implies the existence of constant-round non-malleable commitments; earlier protocols required additional assumptions such as collision resistant hash functions or subexponential one-way functions. Our protocol also satisfies the stronger notions of concurrent non-malleability and robustness. As a corollary, we establish that constant-round non-malleable zero-knowledge arguments for NP can be based on one-way functions and constant-round secure multi-party computation can be based on enhanced trapdoor permutations; also here, earlier protocols additionally required either collision-resistant hash functions or subexponential one-way functions. Title Two paradigms of composition Abstract We use a small example to discuss how two different formal modeling languages address the interaction between data and behavior using parallel composition. We use this discussion to highlight the distinction between CCS Theory of computation Models of computation Streaming models CCS Theory of computation Models of computation Concurrency CCS Theory of computation Models of computation Timed and hybrid models CCS Theory of computation Models of computation Abstract machines Title Computing bounded reach sets from sampled simulation traces Abstract This paper presents an algorithm which uses simulation traces and formal models for computing overapproximations of reach sets of deterministic hybrid systems. The implementation of the algorithm in a tool, Title SEEP: exploiting symbolic execution for energy-aware programming Abstract In recent years, there has been a rapid evolution of energy-aware computing systems (e.g., mobile devices, wireless sensor nodes), as still rising system complexity and increasing user demands make energy a permanently scarce resource. While static and dynamic optimizations for energy-aware execution have been massively explored, writing energy-efficient programs in the first place has only received limited attention.
This paper proposes SEEP, a framework which exploits symbolic execution and platform-specific energy profiles to provide the basis for Title New ideas track: testing mapreduce-style programs Abstract MapReduce has become a common programming model for processing very large amounts of data, which is needed in a spectrum of modern computing applications. Today several MapReduce implementations and execution systems exist and many MapReduce programs are being developed and deployed in practice. However, developing MapReduce programs is not always an easy task. The programming model makes programs prone to several MapReduce-specific bugs. That is, to produce deterministic results, a MapReduce program needs to satisfy certain high-level correctness conditions. A violating program may yield different output values on the same input data, based on low-level infrastructure events such as network latency, scheduling decisions, etc. Current MapReduce systems and tools lack support for checking these conditions and reporting violations. This paper presents a novel technique that systematically searches for such bugs in MapReduce applications and generates corresponding test cases. The technique works by encoding the high-level MapReduce correctness conditions as symbolic program constraints and checking them for the program under test. To the best of our knowledge, this is the first approach to addressing this problem of MapReduce-style programming. Title SCORE: a scalable concolic testing tool for reliable embedded software Abstract Current industrial testing practices often generate test cases manually, which degrades both the effectiveness and efficiency of testing. To alleviate this problem, concolic testing generates test cases that can achieve high coverage in an automated fashion. One main task of concolic testing is to extract symbolic information from a concrete execution of a target program at runtime. Thus, a design decision on how to extract symbolic information affects efficiency, effectiveness, and applicability of concolic testing. We have developed a Scalable COncolic testing tool for REliable embedded software (SCORE) that targets embedded C programs. SCORE instruments a target C program to extract symbolic information and applies concolic testing to a target program in a scalable manner by utilizing a large number of distributed computing nodes. In this paper, we describe our design decisions that are implemented in SCORE and demonstrate the performance of SCORE through the experiments on the SIR benchmarks. Title Towards systematic, comprehensive trace generation for behavioral pattern detection through symbolic execution Abstract In reverse engineering, dynamic pattern detection is accomplished by collecting execution traces and comparing them to expected behavioral patterns. The traces are collected by manually executing the program under study and therefore represent only part of all relevant program behavior. This can lead to false conclusions about the detected patterns. In this paper, we propose to generate all relevant program traces by using symbolic execution. In order to reduce the created trace data, we allow limiting the trace collection to a user-selectable subset of the statically detected pattern candidates. Title eXpress: guided path exploration for efficient regression test generation Abstract Software programs evolve throughout their lifetime undergoing various changes.
While making these changes, software developers may introduce regression faults. It is desirable to detect these faults as quickly as possible to reduce the cost involved in fixing them. One existing solution is continuous testing, which runs an existing test suite to quickly find regression faults as soon as code changes are saved. However, the effectiveness of continuous testing depends on the capability of the existing test suite for finding behavioral differences across versions. To address the issue, we propose an approach, called eXpress, that conducts efficient regression test generation based on a path-exploration-based test generation (PBTG) technique, such as dynamic symbolic execution. eXpress prunes various irrelevant paths with respect to detecting behavioral differences to optimize the search strategy of a PBTG technique. As a result, the PBTG technique focuses its efforts on regression test generation. In addition, eXpress leverages the existing test suite (if available) for the original version to efficiently execute the changed code regions of the program and infect program states. Experimental results on 67 versions (in total) of four programs (two from the subject infrastructure repository and two from real-world open source projects) show that, using eXpress, a state-of-the-art PBTG technique, called Pex, requires about 36% less time (on average) to detect behavioral differences than without using eXpress. In addition, Pex using eXpress detects four behavioral differences that could not be detected without using eXpress (within a time bound). Furthermore, Pex requires 67% less time to find behavioral differences by exploiting an existing test suite than by exploring without the test suite. Title Symbolic execution with mixed concrete-symbolic solving Abstract Symbolic execution is a powerful static program analysis technique that has been used for the automated generation of test inputs. Directed Automated Random Testing (DART) is a dynamic variant of symbolic execution that initially uses random values to execute a program and collects symbolic path conditions during the execution. These conditions are then used to produce new inputs to execute the program along different paths. It has been argued that DART can handle situations where We propose here a technique that mitigates these previous limitations of classical symbolic execution. The proposed technique splits the generated path conditions into (a) constraints that can be solved by a decision procedure and (b) complex non-linear constraints with uninterpreted functions to represent external library calls. The solutions generated from the decision procedure are used to simplify the complex constraints and the resulting path conditions are checked again for satisfiability. We also present heuristics that can further improve our technique. We show how our technique can enable classical symbolic execution to cover paths that other dynamic symbolic execution approaches cannot cover. Our method has been implemented within the Symbolic PathFinder tool and has been applied to several examples, including two from the NASA domain. Title Database state generation via dynamic symbolic execution for coverage criteria Abstract Automatically generating sufficient database states is imperative to reduce human efforts in testing database applications.
Complementing the traditional block or branch coverage, we develop an approach that generates database states to achieve advanced code coverage including boundary value coverage (BVC) and logical coverage (LC) for source code under test. In our approach, we leverage dynamic symbolic execution to examine close relationships among host variables, embedded SQL query statements, and branch conditions in source code. We then derive constraints such that data satisfying those constraints can achieve the target coverage criteria. We implement our approach upon Pex, which is a state-of-the-art DSE-based test-generation tool for .NET. Empirical evaluations on two real database applications show that our approach assists Pex to generate test database states that can effectively achieve both BVC and LC, complementing the block or branch coverage. Title Directed incremental symbolic execution Abstract The last few years have seen a resurgence of interest in the use of symbolic execution -- a program analysis technique developed more than three decades ago to analyze program execution paths. Scaling symbolic execution and other path-sensitive analysis techniques to large systems remains challenging despite recent algorithmic and technological advances. An alternative to solving the problem of scalability is to In this paper, we present CCS Theory of computation Formal languages and automata theory Formalisms CCS Theory of computation Formal languages and automata theory Automata over infinite objects CCS Theory of computation Formal languages and automata theory Grammars and context-free languages Title Genetic evolution of L and FL-systems for the production of rhythmic sequences Abstract Music composition with algorithms inspired by nature has led to the creation of systems that compose music with rich characteristics. Nevertheless, the complexity imposed by unsupervised algorithms may arguably be considered undesirable, especially when considering the composition of rhythms. This work examines the composition of rhythms through L and Finite L-systems (FL-systems) and presents an interpretation from grammatical to rhythmic entities that expresses the repetitiveness and diversity of the output of these systems. Furthermore, we utilize a supervised training scheme that uses Genetic Algorithms (GA) to evolve the rules of L and FL-systems, so that they may compose rhythms with certain characteristics. Simple rhythmic indicators are introduced that describe the density, pauses, self similarity, symmetry and syncopation of rhythms. With fitness evaluations based on these indicators we assess the performance of L and FL-systems and present results that indicate the superiority of the FL-system in terms of adaptability to certain rhythmic tasks. Title The effect of mathematical vs. verbal formulation for finite automata Abstract This study examines the capability of high-school students to solve problems related to the computational model of deterministic finite automata. Specifically, we compared students' achievements when solving verbal "story-like" questions with their achievements on similar problems formulated mathematically. A questionnaire composed of two questions, each formulated in two ways (verbal and mathematical), was given to the students as part of an exam. Generally, average or weak students got lower grades when they had to solve verbal questions. It was also found that the gap between students' achievements on verbal vs.
mathematical questions widened for weaker students and when the teaching and practicing time was reduced. The students' mistakes originated from their difficulties in extracting a formal language from the story and in correctly translating constraints given in the verbal formulation of the questions. It was also found that when the students were unfamiliar with the content and the context of the story they had difficulties comprehending the text. This in turn caused the students to inaccurately describe the formal language (alphabet and constraints) and thus to design an incorrect automaton. Title Foundations of regular expressions in XML schema languages and SPARQL Abstract Regular expressions can be found in a wide array of technologies for data processing on the web. We are motivated by two such technologies: schema languages for XML and query languages for graph-structured or linked data. Our focus is on theoretical aspects of regular expressions in these contexts. Title Symbolic analysis of network security policies using rewrite systems Abstract First designed to enable private networks to be opened up to the outside world in a secure way, the growing complexity of organizations makes firewalls indispensable to control information flow within a company. The central role they hold in the security of the organization's information makes their management a critical task, and that is why for years many works have focused on checking and analyzing firewalls. The composition of firewalls, taking into account routing rules, has nevertheless often been neglected. In this paper, we propose to specify all components of a firewall, i.e., filtering and translation rules, as a rewrite system. We show that such specifications allow us to handle usual problems such as comparison, structural analysis and query analysis. We also propose a formal way to describe the composition of firewalls (including routing) in order to build a whole network security policy. The properties of the obtained rewrite system are strongly related to the properties of the specified networks and thus, classical theoretical and practical tools can be used to obtain relevant security properties of the security policies. Title Delayed semantic actions in Yakker Abstract Yakker is a parser generator that supports semantic actions, lexical binding of semantic values, and speculative parsing techniques such as backtracking and context-free lookahead. To avoid executing semantic actions in speculative parses that will eventually be discarded, we divide parsing into two conceptually independent phases. In the first (early) phase, the parser explores multiple possible parse trees without executing semantic actions. The second (late) phase executes the delayed semantic actions once the first phase has determined they are necessary. Execution of the two phases can be overlapped. We structure the early phase as a transducer which maps the input language to an output language of labels. A string in the output language is a We formalize delayed semantic actions and discuss a number of practical issues involved in implementing them in Yakker, including our support for regular right part grammars and dependent parsing, the design of the data structures that support histories, and memory management techniques critical for efficient implementation. Title Regular expression containment: coinductive axiomatization and computational interpretation Abstract We present a new sound and complete axiomatization of regular expression containment.
It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as We show how to encode regular expression equivalence proofs in Salomaa's, Kozen's and Grabmayer's axiomatizations into our containment system, which equips their axiomatizations with a computational interpretation and implies completeness of our axiomatization. To ensure its soundness, we require that the computational interpretation of the coinduction rule be a hereditarily total function. Hereditary totality can be considered the mother of syntactic side conditions: it "explains" their soundness, yet cannot be used as a conventional side condition in its own right since it turns out to be undecidable. We discuss application of Neither regular expressions as types nor subtyping interpreted coercively are novel Title Semantics and algorithms for data-dependent grammars Abstract We present the design and theory of a new parsing engine, YAKKER, capable of satisfying the many needs of modern programmers and modern data processing applications. In particular, our new parsing engine handles (1) full scannerless context-free grammars with (2) regular expressions as right-hand sides for defining nonterminals. YAKKER also includes (3) facilities for binding variables to intermediate parse results and (4) using such bindings within arbitrary constraints to control parsing. These facilities allow the kind of data-dependent parsing commonly needed in systems applications, particularly those that operate over binary data. In addition, (5) nonterminals may be parameterized by arbitrary values, which gives the system good modularity and abstraction properties in the presence of data-dependent parsing. Finally, (6) legacy parsing libraries, such as sophisticated libraries for dates and times, may be directly incorporated into parser specifications. We illustrate the importance and utility of this rich collection of features by presenting its use on examples ranging from difficult programming language grammars to web server logs to binary data specification. We also show that our grammars have important compositionality properties and explain why such properties are important in modern applications such as automatic grammar induction. In terms of technical contributions, we provide a traditional high-level semantics for our new grammar formalization and show how to compile grammars into nondeterministic automata. These automata are stack-based, somewhat like conventional push-down automata, but are also equipped with environments to track data-dependent parsing state. We prove the correctness of our translation of data-dependent grammars into these new automata and then show how to implement the automata efficiently using a variation of Earley's parsing algorithm. Title A structure based computer grammar to understand simple and compound English sentences Abstract In this paper we are contributing to the Natural Language Understanding phase of the broader area of Natural Language Processing. This paper presents the design of a computer grammar that is capable of understanding simple and compound sentences of the English language efficiently.
To begin the design, various possible syntactic structures of the simple and compound English sentences have been explored thoroughly. Various kinds of simple and compound sentences along with the use of different tenses as well as voices have been considered in the design. Then structural representations (SRs) of these sentences have been prepared, as discussed in the analysis section. Finally, all these SRs have been converted into computer grammar. This computer grammar can be used for any European language with some modifications. We have also developed a test parser using some portion of the computer grammar discussed in this paper. This computer grammar can be used in many applications as elaborated in the conclusion. Title Efficient regular expression evaluation: theory to practice Abstract Several algorithms and techniques have been proposed recently to accelerate regular expression matching and enable deep packet inspection at line rate. This work aims to provide a comprehensive practical evaluation of existing techniques, extending them and analyzing their compatibility. The study focuses on two hardware architectures: memory-based ASICs and FPGAs. Title Teaching push-down automata and Turing machines Abstract In this paper we present the new version of a tool to assist in teaching formal languages and automata theory. In the previous version the tool provided algorithms for regular expressions, finite automata and context-free grammars. The new version can also simulate push-down automata and Turing machines. CCS Theory of computation Formal languages and automata theory Tree languages CCS Theory of computation Formal languages and automata theory Automata extensions CCS Theory of computation Formal languages and automata theory Regular languages CCS Theory of computation Computational complexity and cryptography Complexity classes Title Dynamic Indexability and the Optimality of B-Trees Abstract One-dimensional range queries, as one of the most basic types of queries in databases, have been studied extensively in the literature. For large databases, the goal is to build an external index that is optimized for disk block accesses (or I/Os). The problem is well understood in the static case. Theoretically, there exists an index of linear size that can answer a range query in O(1 + However, the problem is still wide open in the dynamic setting, when insertions and deletions of records are to be supported. With smart buffering, it is possible to speed up updates significantly to Title The cost of fault tolerance in multi-party communication complexity Abstract Multi-party communication complexity involves distributed computation of a function over inputs held by multiple distributed players. A key focus of distributed computing research, since the very beginning, has been to tolerate crash failures. It is thus natural to ask " Whether fault-tolerant communication complexity is interesting to study largely depends on how big a difference failures make. This paper proves that the impact of failures is significant, at least for the SUM aggregation function in general networks: As our central contribution, we prove that there exists (at least) an Part of our results are obtained via a novel reduction from a new two-party problem UNIONSIZECP that we introduce.
UNIONSIZECP comes with a novel Title Deterministic multi-channel information exchange Abstract In this paper, we study the information exchange problem on a set of multiple access channels: k arbitrary nodes have information they want to distribute to the entire network via a shared medium partitioned into channels. We present algorithms and lower bounds on the time and channel complexity for disseminating these k information items in a single-hop network of n nodes. More precisely, we devise a deterministic algorithm running in asymptotically optimal time O(k) using O(n Title The GCT program toward the P vs. NP problem Abstract Exploring the power and potential of geometric complexity theory. Title Indexability of 2D range search revisited: constant redundancy and weak indivisibility Abstract In the 2D Title Rational proofs Abstract We study a new type of proof system, where an unbounded prover and a polynomial time verifier interact, on inputs a string x and a function f, so that the Verifier may learn f(x). The novelty of our setting is that there no longer are "good" or "malicious" provers, but only rational ones. In essence, the Verifier has a budget c and gives the Prover a reward r ∈ [0,c] determined by the transcript of their interaction; the prover wishes to maximize his expected reward; and his reward is maximized only if the Verifier correctly learns f(x). Rational proof systems are as powerful as their classical counterparts for polynomially many rounds of interaction, but are much more powerful when we only allow a constant number of rounds. Indeed, we prove that if f ∈ #P, then f is computable by a one-round rational Merlin-Arthur game, where, on input x, Merlin's single message actually consists of sending just the value f(x). Further, we prove that CH, the counting hierarchy, coincides with the class of languages computable by a constant-round rational Merlin-Arthur game. Our results rely on a basic and crucial connection between rational proof systems and proper scoring rules, a tool developed to elicit truthful information from experts. Title Separating multilinear branching programs and formulas Abstract This work deals with the power of linear algebra in the context of multilinear computation. By linear algebra we mean algebraic branching programs (ABPs) which are known to be computationally equivalent to two basic tools in linear algebra: iterated matrix multiplication and the determinant. We compare the computational power of multilinear ABPs to that of multilinear arithmetic formulas, and prove a tight super-polynomial separation between the two models. Specifically, we describe an explicit Title Determinism versus nondeterminism with arithmetic tests and computation: extended abstract Abstract For each natural number d we consider a finite structure m Another way of formulating the theorem, in a slightly stronger form, is that over the structures m We also show that the theorem, in both forms, remains true if the binary operation min [x Title On the virtue of succinct proofs: amplifying communication complexity hardness to time-space trade-offs in proof complexity Abstract An active line of research in proof complexity over the last decade has been the study of proof space and trade-offs between size and space.
Such questions were originally motivated by practical SAT solving, but have also led to the development of new theoretical concepts in proof complexity of intrinsic interest and to results establishing nontrivial relations between space and other proof complexity measures. By now, the resolution proof system is fairly well understood in this regard, as witnessed by a sequence of papers leading up to [Ben-Sasson and Nordstrom 2008, 2011] and [Beame, Beck, and Impagliazzo 2012]. However, for other relevant proof systems in the context of SAT solving, such as polynomial calculus (PC) and cutting planes (CP), very little has been known. Inspired by [BN08, BN11], we consider CNF encodings of so-called pebble games played on graphs and the approach of making such pebbling formulas harder by simple syntactic modifications. We use this paradigm of hardness amplification to make progress on the relatively longstanding open question of proving time-space trade-offs for PC and CP. Namely, we exhibit a family of modified pebbling formulas {F_n} such that: - The formulas F_n have size O(n) and width O(1). - They have proofs in length O(n) in resolution, which generalize to both PC and CP. - Any refutation in CP or PCR (a generalization of PC) in length L and space s must satisfy s · log L ≳ ⁴√n. A crucial technical ingredient in these results is a new two-player communication complexity lower bound for composed search problems in terms of block sensitivity, a contribution that we believe to be of independent interest. Title Tight bounds for monotone switching networks via Fourier analysis Abstract We prove tight size bounds on monotone switching networks for the k-clique problem, and for an explicit monotone problem by analyzing the generation problem with a pyramid structure of height h. This gives alternative proofs of the separations of m-NC from m-P and of m-NC CCS Theory of computation Computational complexity and cryptography Problems, reductions and completeness Title Using AVs to explain NP-completeness Abstract We argue that algorithm visualization techniques can be usefully applied to the teaching of NP-completeness results. On the grounds of this opinion and of a quite positive preliminary student evaluation, we have thus included the visualization of four well-known NP-completeness proofs in the distribution of the AlViE algorithm visualization environment. NA Title Amplifying lower bounds by means of self-reducibility Abstract We observe that many important computational problems in NC We also show that problems with small uniform constant-depth circuits have algorithms that simultaneously have small space and time bounds. We then make use of known time-space tradeoff lower bounds to show that SAT requires uniform depth Title Logspace Reduction of Directed Reachability for Bounded Genus Graphs to the Planar Case Abstract Directed reachability (or briefly reachability) is the following decision problem: given a directed graph Title Two-query PCP with subconstant error Abstract We show that the NP-Complete language 3Sat has a PCP verifier that makes two queries to a proof of almost-linear size and achieves subconstant probability of error ϵ= As a corollary, we obtain a host of new results. In particular, our theorem improves many of the hardness of approximation results that are proved using the parallel repetition theorem.
A partial list includes the following: (1) 3Sat cannot be efficiently approximated to within a factor of 7/8+ (2) 3Lin cannot be efficiently approximated to within a factor of 1/2+ (3) A PCP Theorem with amortized query complexity 1 + One of the new ideas that we use is a new technique for doing the Title The myth of the folk theorem Abstract A well-known result in game theory known as "the Folk Theorem" suggests that finding Nash equilibria in repeated games should be easier than in one-shot games. In contrast, we show that the problem of finding any (approximate) Nash equilibrium for a three-player infinitely-repeated game is computationally intractable (even when all payoffs are in {-1,0,1}), unless all of PPAD can be solved in randomized polynomial time. This is done by showing that finding Nash equilibria of (k+1)-player infinitely-repeated games is as hard as finding Nash equilibria of k-player one-shot games, for which PPAD-hardness is known (Daskalakis, Goldberg and Papadimitriou, 2006; Chen, Deng and Teng, 2006; Chen, Teng and Valiant, 2007). This also explains why no computationally-efficient learning dynamics, such as the "no regret" algorithms, can be "rational" (in general games with three or more players) in the sense that, when one's opponents use such a strategy, it is not in general a best reply to follow suit. Title The NP-completeness column: Finding needles in haystacks Abstract This is the 26th edition of a column that covers new developments in the theory of NP-completeness. The presentation is modeled on that which M. R. Garey and I used in our book “Computers and Intractability: A Guide to the Theory of NP-Completeness,” W. H. Freeman & Co., New York, 1979, hereinafter referred to as “[G&J].” Previous columns, the first 23 of which appeared in NA 4 Citations Title Equality of streams is a Π⁰₂-complete problem Abstract Title The NP-completeness column: The many limits on approximation Abstract NA Title Processing queries on tree-structured data efficiently Abstract Title On basing one-way functions on NP-hardness Abstract CCS Theory of computation Computational complexity and cryptography Communication complexity Title Knowledge representation in ICU communication Abstract The need to improve team communication among health care providers is imperative in order to improve quality and reduce costs. Since most patients admitted to the Intensive Care Unit (ICU) suffer life-threatening adverse events [1-2], there must be an effective and efficient communication protocol that facilitates workflow among the clinical team. In this paper, we studied the significance of communication at the ICU and ways to improve it. Through literature review, we identified and analyzed current research methods in order to locate the areas that require further exploration. Based on this review of research methods, we propose our methodology to further comprehend the communication framework at the ICU, which enables identifying factors that enhance and limit the communication process. This research proposes that through data collection, first-hand and from the literature, more communication factors can be identified. Through better understanding, we aim to build a knowledge base which will serve as the foundation for our long-term goal of building an ontology-driven educational tool. Such a tool will be used to educate clinicians about miscommunication issues and as a means to improve it.
The ultimate goal of our research is to reduce medical errors and costs through improved clinical communication and hence enhance patient safety. Title Tracking aggregate vs. individual gaze behaviors during a robot-led tour simplifies overall engagement estimates Abstract As an early behavioral study of what non-verbal features a robot tour guide could use to analyze a crowd, personalize an interaction and/or maintain high levels of engagement, we analyze participant gaze statistics in response to a robot tour guide's deictic gestures. There were thirty-seven participants overall split into nine groups of three to five people each. In groups with the lowest engagement levels, aggregate gaze responses to the robot's deictic gesture involved the fewest total glance shifts, the least time spent looking at the indicated object, and no intra-participant gaze. Our diverse participants had overlapping engagement ratings within their group, and we found that a robot that tracks group rather than individual analytics could capture less noisy and often stronger trends relating gaze features to self-reported engagement scores. Thus we have found indications that aggregate group analysis captures more salient and accurate assessments of overall Title Deterministic capacity modeling for cellular channels: building blocks, approximate regions, and optimal transmission strategies Abstract One of the tools that arose in the context of capacity approximations is the Title Towards coding for maximum errors in interactive communication Abstract We show that it is possible to encode any communication protocol between two parties so that the protocol succeeds even if a (1/4-ε) fraction of all symbols transmitted by the parties are corrupted adversarially, at a cost of increasing the communication in the protocol by a constant factor (the constant depends on ε). This encoding uses a constant-sized alphabet. This improves on an earlier result of Schulman, who showed how to recover when the fraction of errors is bounded by 1/240. We also show how to simulate an arbitrary protocol with a protocol using the binary alphabet, a constant factor increase in communication and tolerating a (1/8-ε) fraction of errors. Title Complexity of fairness constraints for the Dolev-Yao attacker model Abstract Liveness properties do not, in general, hold in the Dolev-Yao attacker model, unless we assume that certain communication channels are resilient, i.e., they do not lose messages. The resilient channels assumption can be seen as a fairness constraint for the Dolev-Yao attacker model. Here we study the complexity of expressing such fairness constraints for the most common interpretation of the Dolev-Yao model, in which the attacker is the communication medium. We give reference models which describe how resilient channels behave, with unbounded and bounded communication buffers. Then we show that, for checking liveness security requirements, any fairness constraint that makes this common interpretation of the Dolev-Yao model sound and complete w.r.t. the unbounded (resp. bounded) reference model is not an ω-regular (resp. locally testable) language. These results stem from the complexity of precisely capturing the behavior of resilient channels, and indicate that verification of liveness security requirements in this interpretation of the Dolev-Yao model cannot be automated efficiently.
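To make the resilient-channels fairness constraint from the Dolev-Yao abstract above concrete, here is a toy finite-trace check (an assumption-laden illustration, not the paper's formal reference models): a channel behaves resiliently on a finite trace if every sent message is eventually received. The paper's negative results concern the infinite-behaviour setting, where the corresponding constraint is shown not to be ω-regular (resp. locally testable).

```python
# Toy illustration (not from the paper): approximate the resilient-channels
# fairness constraint on a finite trace of ("send", m) / ("recv", m) events
# by checking that nothing is lost and nothing is received out of thin air.
from collections import Counter

def respects_resilience(trace: list[tuple[str, str]]) -> bool:
    pending: Counter[str] = Counter()
    for event, msg in trace:
        if event == "send":
            pending[msg] += 1
        elif event == "recv":
            if pending[msg] == 0:
                return False            # received a message that was never sent
            pending[msg] -= 1
    return all(count == 0 for count in pending.values())   # nothing lost

if __name__ == "__main__":
    ok = [("send", "m1"), ("send", "m2"), ("recv", "m1"), ("recv", "m2")]
    lossy = [("send", "m1"), ("send", "m2"), ("recv", "m1")]
    print(respects_resilience(ok), respects_resilience(lossy))  # True False
```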
Title Building a chain of trust: using policy and practice to enhance trustworthy clinical data discovery and sharing Abstract Advances and significant national infrastructure investment in clinical information systems are spurring a demand for secondary use and sharing of clinical and genetic data for translational research. In this paper, we describe the need for technically leveraged policy models and governance strategies to support data sharing between a range of disparate stakeholders where trust is not easily established or maintained. Title A two-layered model for scalable, heterogeneous group communications Abstract In peer-to-peer (P2P) applications, a group of multiple peer processes (peers) are required to cooperate with each other. In this paper, we discuss a heterogeneous hybrid-time group communication (HHT) protocol which takes advantage of the linear time (LT) and physical time (PT) to causally order messages in a scalable heterogeneous group. How messages can be ordered depends on the accuracy of each physical clock and the minimum delay time between each pair of peers. In this paper, we consider a heterogeneous type of group where the clock accuracy of each peer and the minimum delay time between every pair of peers are not the same. In group protocols, even if a pair of messages are ordered in the protocol, the messages may not be causally ordered. Thus, some messages are unnecessarily ordered in the protocols. In this paper, we show that the number of unnecessarily ordered messages can be reduced in the HHT protocol. In a scalable group, it is not easy, and maybe impossible, for each peer to hold information on the clock accuracy and minimum delay time of every peer. In this paper, we introduce a two-layered model of a heterogeneous group to reduce the information which each peer has to hold. Title BICM-OFDM for cooperative communications with multiple synchronization errors Abstract In this paper, a bit-interleaved coded modulation (BICM) scheme with an iterative receiver is proposed for asynchronous cooperative communications. In such environments, synchronization errors may be severe because relays experience different environments. Rather than treating synchronization errors as impairments, this study focuses on exploiting possible benefits of them, particularly the possible multiple carrier frequency offsets (MCFOs). With the proposed scheme, potential space diversity of multiple relays with MCFOs is manifested as time diversity, which in turn is harvested through the carefully-designed iterative receiver. The system achieves improved diversity gain and is extremely robust even when all asynchronous parameters, including multiple Doppler effects, are considered. Title Boosted-OFDM scheme for 802.11n WLANs Abstract In this paper the performance of the boosted space-time diversity scheme is examined for a transmission over a frequency-selective Rayleigh fading channel. The simulation scenario is similar to that of the 802.11n Wireless Local Area Network (WLAN) specification. Two interleaver designs (ideal and pragmatic) are considered. Moreover, a few methods for early stopping of iterative decoding are developed to reduce the computational payload of the receiver routine. It is shown that a reasonable performance gain can be achieved by the boosted system in comparison with the 2x2 diversity scheme provided by the 802.11n specification.
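As background for the two OFDM abstracts above, the following minimal sketch shows a plain OFDM round trip (QPSK mapping, IFFT, cyclic prefix, and the inverse steps) over an ideal channel using numpy. The subcarrier count, prefix length, and mapping are arbitrary illustrative choices; the sketch does not implement the BICM iterative receiver or the boosted space-time diversity scheme discussed in the papers.

```python
# Minimal OFDM round trip over an ideal channel (assumption-laden sketch,
# not the papers' schemes): bits -> QPSK -> IFFT -> cyclic prefix, and back.
import numpy as np

N_SUBCARRIERS = 64
CP_LEN = 16

def qpsk_mod(bits: np.ndarray) -> np.ndarray:
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def qpsk_demod(symbols: np.ndarray) -> np.ndarray:
    bits = np.empty((symbols.size, 2), dtype=int)
    bits[:, 0] = symbols.real < 0
    bits[:, 1] = symbols.imag < 0
    return bits.reshape(-1)

def ofdm_tx(bits: np.ndarray) -> np.ndarray:
    freq = qpsk_mod(bits)                              # one symbol per subcarrier
    time = np.fft.ifft(freq, N_SUBCARRIERS)
    return np.concatenate([time[-CP_LEN:], time])      # prepend cyclic prefix

def ofdm_rx(samples: np.ndarray) -> np.ndarray:
    time = samples[CP_LEN:]                            # drop cyclic prefix
    freq = np.fft.fft(time, N_SUBCARRIERS)
    return qpsk_demod(freq)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tx_bits = rng.integers(0, 2, 2 * N_SUBCARRIERS)
    rx_bits = ofdm_rx(ofdm_tx(tx_bits))
    print("bit errors:", int(np.sum(tx_bits != rx_bits)))   # 0 on an ideal channel
```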
Title Cross-sensor coding techniques for low energy sensor networks Abstract This work addresses the uneven energy consumption problem in data gathering sensor networks where the nodes closer to the sink tend to consume more energy than the farther nodes. This energy unfairness can significantly shorten the lifetime of a sensor network. We propose a novel CCS Theory of computation Computational complexity and cryptography Circuit complexity CCS Theory of computation Computational complexity and cryptography Oracles and decision trees Title Alternating automata on data trees and XPath satisfiability Abstract A data tree is an unranked ordered tree whose every node is labeled by a letter from a finite alphabet and an element (“datum”) from an infinite set, where the latter can only be compared for equality. The article considers alternating automata on data trees that can move downward and rightward, and have one register for storing data. The main results are that nonemptiness over finite data trees is decidable but not primitive recursive, and that nonemptiness of safety automata is decidable but not elementary. The proofs use nondeterministic tree automata with faulty counters. Allowing upward moves, leftward moves, or two registers, each causes undecidability. As corollaries, decidability is obtained for two data-sensitive fragments of the XPath query language. Title Safety alternating automata on data words Abstract A data word is a sequence of pairs of a letter from a finite alphabet and an element from an infinite set, where the latter can only be compared for equality. Safety one-way alternating automata with one register on infinite data words are considered, their nonemptiness is shown to be ExpSpace-complete, and their inclusion decidable but not primitive recursive. The same complexity bounds are obtained for satisfiability and refinement, respectively, for the safety fragment of linear temporal logic with freeze quantification. Dropping the safety restriction, adding past temporal operators, or adding one more register, each causes undecidability. Title XPath satisfiability in the presence of DTDs Abstract We study the satisfiability problem associated with XPath in the presence of DTDs. This is the problem of determining, given a query Title Aggregation in multiagent systems and the problem of truth-tracking Abstract One of the major problems that artificial intelligence needs to tackle is the combination of different and potentially conflicting sources of information. Examples are multi-sensor fusion, database integration and expert systems development. In this paper we are interested in the aggregation of propositional logic-based information, a problem recently addressed in the literature on Title Deciding equivalences among conjunctive aggregate queries Abstract Equivalence of aggregate queries is investigated for the class of conjunctive queries with comparisons and the aggregate operators count, count-distinct, min, max, and sum. Essentially, this class contains unnested SQL queries with the above aggregate operators, with a where clause consisting of a conjunction of comparisons, and without a having clause. The comparisons are either interpreted over a domain with a dense order (like the rationals) or with a discrete order (like the integers). Characterizations of equivalence differ for the two cases. For queries with either max or min, equivalence is characterized in terms of dominance mappings, which can be viewed as a generalization of containment mappings.
For queries with the count-distinct operator, a sufficient condition for equivalence is given in terms of equivalence of conjunctive queries under set semantics. For some special cases, it is shown that this condition is also necessary. For conjunctive queries with comparisons but without aggregation, equivalence under bag-set semantics is characterized in terms of isomorphism. This characterization essentially remains the same also for queries with the count operator. Moreover, this characterization also applies to queries with the sum operator if the queries have either constants or comparisons, but not both. In the general case (i.e., both comparisons and constants), the characterization of the equivalence of queries with the sum operator is more elaborate. All the characterizations given in the paper are decidable in polynomial space. Title On quantum versions of record-breaking algorithms for SAT Abstract Title Equivalences among aggregate queries with negation Abstract Title A new decidability technique for ground term rewriting systems with applications Abstract Title Satisfiability of word equations with constants is in PSPACE Abstract Title Classes of term rewrite systems with polynomial confluence problems Abstract CCS Theory of computation Computational complexity and cryptography Algebraic complexity theory CCS Theory of computation Computational complexity and cryptography Quantum complexity theory CCS Theory of computation Computational complexity and cryptography Proof complexity Title Deductive inference for the interiors and exteriors of horn theories Abstract In this article, we investigate deductive inference for interiors and exteriors of Horn knowledge bases, where interiors and exteriors were introduced by Makino and Ibaraki [1996] to study stability properties of knowledge bases. We present a linear time algorithm for deduction for interiors and show that deduction is coNP-complete for exteriors. Under model-based representation, we show that the deduction problem for interiors is NP-complete while the one for exteriors is coNP-complete. As for Horn envelopes of exteriors, we show that it is linearly solvable under model-based representation, while it is coNP-complete under formula-based representation. We also discuss polynomially solvable cases for all the intractable problems. Title From Almost Optimal Algorithms to Logics for Complexity Classes via Listings and a Halting Problem Abstract Let C denote one of the complexity classes “polynomial time,” “logspace,” or “nondeterministic logspace.” We introduce a logic Title Cheap, easy, and massively effective viral marketing in social networks: truth or fiction? Abstract Online social networks (OSNs) have become one of the most effective channels for marketing and advertising. Since users are often influenced by their friends, "word-of-mouth" exchanges, so-called viral marketing in social networks, can be used to increase product adoption or to spread content widely over the network. The common perception of viral marketing as being cheap, easy, and massively effective makes it an ideal replacement for traditional advertising. However, recent studies have revealed that the propagation often fades quickly within only a few hops from the sources, counteracting the assumption of self-perpetuating influence considered in the literature. With only limited influence propagation, is massively reaching customers via viral marketing still affordable? How to economically spend more resources to increase the spreading speed?
We investigate the cost-effective massive viral marketing problem, taking into consideration the limited influence propagation. Both analytical analysis based on power-law network theory and numerical analysis demonstrate that the viral marketing might involve costly seeding. To minimize the seeding cost, we provide a mathematical programming formulation to find optimal seeding for medium-size networks and propose VirAds, an efficient algorithm, to tackle the problem on large-scale networks. VirAds guarantees a relative error bound of Title Short proofs for the determinant identities Abstract We study arithmetic proof systems P This yields a solution to a basic open problem in propositional proof complexity, namely, whether there are polynomial-size NC Title Real-time bidding algorithms for performance-based display ad allocation Abstract We describe a real-time bidding algorithm for performance-based display ad allocation. A central issue in performance display advertising is matching campaigns to ad impressions, which can be formulated as a constrained optimization problem that maximizes revenue subject to constraints such as budget limits and inventory availability. The current practice is to solve the optimization problem offline at a tractable level of impression granularity (e.g., the page level), and to serve ads online based on the precomputed static delivery scheme. Although this offline approach takes a global view to achieve optimality, it fails to scale to ad allocation at the individual impression level. Therefore, we propose a real-time bidding algorithm that enables fine-grained impression valuation (e.g., targeting users with real-time conversion data), and adjusts value-based bids according to real-time constraint snapshots (e.g., budget consumption levels). Theoretically, we show that under a linear programming (LP) primal-dual formulation, the simple real-time bidding algorithm is indeed an online solver to the original primal problem by taking the optimal solution to the dual problem as input. In other words, the online algorithm guarantees the offline optimality given the same level of knowledge an offline optimization would have. Empirically, we develop and experiment with two real-time bid adjustment approaches to adapting to the non-stationary nature of the marketplace: one adjusts bids against real-time constraint satisfaction levels using control-theoretic methods, and the other adjusts bids also based on the statistically modeled historical bidding landscape. Finally, we show experimental results with real-world ad delivery data that support our theoretical conclusions. Title On the competitive ratio of evaluating priced functions Abstract Let For the model where the costs of the variables are known, we present a Via the We also show how to extend the Title Lower Bounds for Coin-Weighing Problems Abstract Among a set of We demonstrate an exponential gap between the nonadaptive and adaptive coin-weighing complexities of the counting and parity problems. We prove a tight Title Logic of infons: The propositional case Abstract Infons are statements viewed as containers of information (rather than representations of truth values). The logic of infons turns out to be a conservative extension of the logic known as constructive or intuitionistic.
Distributed Knowledge Authorization Language uses additional unary connectives “ Title Monadic datalog over finite structures of bounded treewidth Abstract Bounded treewidth and monadic second-order (MSO) logic have proved to be key concepts in establishing fixed-parameter tractability results. Indeed, by Courcelle's Theorem we know that any property of finite structures, which is expressible by an MSO sentence, can be decided in linear time (data complexity) if the structures have bounded treewidth. In principle, Courcelle's Theorem can be applied directly to construct concrete algorithms by transforming the MSO evaluation problem into a tree language recognition problem. The latter can then be solved via a finite tree automaton (FTA). However, this approach has turned out to be problematical, since even relatively simple MSO formulae may lead to a “state explosion” of the FTA. In this work we propose monadic datalog (i.e., datalog where all intensional predicate symbols are unary) as an alternative method to tackle this class of fixed-parameter tractable problems. We show that if some property of finite structures is expressible in MSO then this property can also be expressed by means of a monadic datalog program over the Title On the border length minimization problem (BLMP) on a square array Abstract Protein/Peptide microarrays are rapidly gaining momentum in the diagnosis of cancer. High-density and high-throughput peptide arrays are being extensively used to detect tumor biomarkers, examine kinase activity, identify antibodies having low serum titers and locate antibody signatures. Improving the yield of microarray fabrication involves solving a hard combinatorial optimization problem called the The hierarchical refinement solver is available as an open-source code at http://launchpad.net/blm-solve. CCS Theory of computation Computational complexity and cryptography Interactive proof systems Title Deductive inference for the interiors and exteriors of horn theories Abstract In this article, we investigate deductive inference for interiors and exteriors of Horn knowledge bases, where interiors and exteriors were introduced by Makino and Ibaraki [1996] to study stability properties of knowledge bases. We present a linear time algorithm for deduction for interiors and show that deduction is coNP-complete for exteriors. Under model-based representation, we show that the deduction problem for interiors is NP-complete while the one for exteriors is coNP-complete. As for Horn envelopes of exteriors, we show that it is linearly solvable under model-based representation, while it is coNP-complete under formula-based representation. We also discuss polynomially solvable cases for all the intractable problems. Title From Almost Optimal Algorithms to Logics for Complexity Classes via Listings and a Halting Problem Abstract Let C denote one of the complexity classes “polynomial time,” “logspace,” or “nondeterministic logspace.” We introduce a logic Title Cheap, easy, and massively effective viral marketing in social networks: truth or fiction? Abstract Online social networks (OSNs) have become one of the most effective channels for marketing and advertising. Since users are often influenced by their friends, "word-of-mouth" exchanges, so-called viral marketing in social networks, can be used to increase product adoption or widely spread content over the network.
The common perception of viral marketing as being cheap, easy, and massively effective makes it an ideal replacement for traditional advertising. However, recent studies have revealed that the propagation often fades quickly within only a few hops from the sources, counteracting the assumption of self-perpetuating influence considered in the literature. With only limited influence propagation, is massively reaching customers via viral marketing still affordable? How can we economically spend more resources to increase the spreading speed? We investigate the cost-effective massive viral marketing problem, taking into consideration the limited influence propagation. Both analytical analysis based on power-law network theory and numerical analysis demonstrate that the viral marketing might involve costly seeding. To minimize the seeding cost, we provide a mathematical programming formulation to find optimal seeding for medium-size networks and propose VirAds, an efficient algorithm, to tackle the problem on large-scale networks. VirAds guarantees a relative error bound of Title Short proofs for the determinant identities Abstract We study arithmetic proof systems P This yields a solution to a basic open problem in propositional proof complexity, namely, whether there are polynomial-size NC Title Real-time bidding algorithms for performance-based display ad allocation Abstract We describe a real-time bidding algorithm for performance-based display ad allocation. A central issue in performance display advertising is matching campaigns to ad impressions, which can be formulated as a constrained optimization problem that maximizes revenue subject to constraints such as budget limits and inventory availability. The current practice is to solve the optimization problem offline at a tractable level of impression granularity (e.g., the page level), and to serve ads online based on the precomputed static delivery scheme. Although this offline approach takes a global view to achieve optimality, it fails to scale to ad allocation at the individual impression level. Therefore, we propose a real-time bidding algorithm that enables fine-grained impression valuation (e.g., targeting users with real-time conversion data), and adjusts value-based bids according to real-time constraint snapshots (e.g., budget consumption levels). Theoretically, we show that under a linear programming (LP) primal-dual formulation, the simple real-time bidding algorithm is indeed an online solver to the original primal problem by taking the optimal solution to the dual problem as input. In other words, the online algorithm guarantees the offline optimality given the same level of knowledge an offline optimization would have. Empirically, we develop and experiment with two real-time bid adjustment approaches to adapting to the non-stationary nature of the marketplace: one adjusts bids against real-time constraint satisfaction levels using control-theoretic methods, and the other adjusts bids also based on the statistically modeled historical bidding landscape. Finally, we show experimental results with real-world ad delivery data that support our theoretical conclusions.
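The real-time bidding abstract above describes an online bidder that takes the optimal dual solution of the offline LP as input, bids impression values adjusted by dual (shadow) prices, and corrects those prices as budgets are consumed. The Python sketch below is only a minimal illustration of that general idea under simplifying assumptions: one dual price per campaign budget and a purely multiplicative feedback correction. The names and the exact adjustment rule are hypothetical and are not the paper's algorithm.

```python
# Illustrative dual-price-based real-time bidding (simplified sketch).
from dataclasses import dataclass

@dataclass
class Campaign:
    name: str
    budget: float
    alpha: float          # dual price of the budget constraint (from an offline LP)
    spend: float = 0.0

def choose_bid(campaigns, values):
    """Pick the campaign with the largest dual-adjusted value and bid that amount."""
    best, best_bid = None, 0.0
    for c in campaigns:
        if c.spend >= c.budget:
            continue                          # budget exhausted
        adjusted = values[c.name] - c.alpha   # impression value net of the shadow price
        if adjusted > best_bid:
            best, best_bid = c, adjusted
    return best, best_bid

def control_update(c, target_rate, actual_rate, gain=0.1):
    """Toy control-theoretic correction: raise alpha when spending faster than planned."""
    c.alpha = max(0.0, c.alpha * (1.0 + gain * (actual_rate - target_rate)))

# Example usage with made-up numbers.
camps = [Campaign("A", budget=100.0, alpha=0.4), Campaign("B", budget=50.0, alpha=0.1)]
impression_values = {"A": 1.2, "B": 0.9}
winner, bid = choose_bid(camps, impression_values)
if winner is not None:
    winner.spend += bid
    control_update(winner, target_rate=0.01, actual_rate=winner.spend / winner.budget)
    print(f"bid {bid:.2f} for campaign {winner.name}")
```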
Title On the competitive ratio of evaluating priced functions Abstract Let For the model where the costs of the variables are known, we present a Via the We also show how to extend the Title Lower Bounds for Coin-Weighing Problems Abstract Among a set of We demonstrate an exponential gap between the nonadaptive and adaptive coin-weighing complexities of the counting and parity problems. We prove a tight Title Logic of infons: The propositional case Abstract Infons are statements viewed as containers of information (rather than representations of truth values). The logic of infons turns out to be a conservative extension of the logic known as constructive or intuitionistic. Distributed Knowledge Authorization Language uses additional unary connectives “ Title Monadic datalog over finite structures of bounded treewidth Abstract Bounded treewidth and monadic second-order (MSO) logic have proved to be key concepts in establishing fixed-parameter tractability results. Indeed, by Courcelle's Theorem we know that any property of finite structures, which is expressible by an MSO sentence, can be decided in linear time (data complexity) if the structures have bounded treewidth. In principle, Courcelle's Theorem can be applied directly to construct concrete algorithms by transforming the MSO evaluation problem into a tree language recognition problem. The latter can then be solved via a finite tree automaton (FTA). However, this approach has turned out to be problematical, since even relatively simple MSO formulae may lead to a “state explosion” of the FTA. In this work we propose monadic datalog (i.e., datalog where all intensional predicate symbols are unary) as an alternative method to tackle this class of fixed-parameter tractable problems. We show that if some property of finite structures is expressible in MSO then this property can also be expressed by means of a monadic datalog program over the Title On the border length minimization problem (BLMP) on a square array Abstract Protein/Peptide microarrays are rapidly gaining momentum in the diagnosis of cancer. High-density and high-throughput peptide arrays are being extensively used to detect tumor biomarkers, examine kinase activity, identify antibodies having low serum titers and locate antibody signatures. Improving the yield of microarray fabrication involves solving a hard combinatorial optimization problem called the The hierarchical refinement solver is available as an open-source code at http://launchpad.net/blm-solve. CCS Theory of computation Computational complexity and cryptography Complexity theory and logic Title Tight bounds for monotone switching networks via Fourier analysis Abstract We prove tight size bounds on monotone switching networks for the k-clique problem, and for an explicit monotone problem by analyzing the generation problem with a pyramid structure of height h. This gives alternative proofs of the separations of m-NC from m-P and of m-NC Title Rational proofs Abstract We study a new type of proof system, where an unbounded prover and a polynomial time verifier interact, on inputs a string x and a function f, so that the Verifier may learn f(x). The novelty of our setting is that there no longer are "good" or "malicious" provers, but only rational ones. In essence, the Verifier has a budget c and gives the Prover a reward r ∈ [0,c] determined by the transcript of their interaction; the prover wishes to maximize his expected reward; and his reward is maximized only if the verifier correctly learns f(x).
Rational proof systems are as powerful as their classical counterparts for polynomially many rounds of interaction, but are much more powerful when we only allow a constant number of rounds. Indeed, we prove that if f ∈ #P, then f is computable by a one-round rational Merlin-Arthur game, where, on input x, Merlin's single message actually consists of sending just the value f(x). Further, we prove that CH, the counting hierarchy, coincides with the class of languages computable by a constant-round rational Merlin-Arthur game. Our results rely on a basic and crucial connection between rational proof systems and proper scoring rules, a tool developed to elicit truthful information from experts. Title Hierarchies for semantic classes Abstract Title On the space complexity of randomized synchronization Abstract Title How many queries are needed to learn? Abstract Title On the impact of forgetting on learning machines Abstract Title How reductions to sparse sets collapse the polynomial-time hierarchy: a primer: Part II restricted polynomial-time reductions Abstract Title The membership problem in aperiodic transformation monoids Abstract Title Lower bounds for the low hierarchy Abstract Title Relativized polynomial time hierarchies having exactly K levels Abstract CCS Theory of computation Computational complexity and cryptography Cryptographic primitives CCS Theory of computation Computational complexity and cryptography Cryptographic protocols CCS Theory of computation Logic Logic and verification Title Towards automatic verification of affine hybrid system stability Abstract Title Reasoning about systems with many processes Abstract Title PVS - design for a practical verification system Abstract Title An approach to program verification Abstract CCS Theory of computation Logic Proof theory Title Deductive inference for the interiors and exteriors of horn theories Abstract In this article, we investigate deductive inference for interiors and exteriors of Horn knowledge bases, where interiors and exteriors were introduced by Makino and Ibaraki [1996] to study stability properties of knowledge bases. We present a linear time algorithm for deduction for interiors and show that deduction is coNP-complete for exteriors. Under model-based representation, we show that the deduction problem for interiors is NP-complete while the one for exteriors is coNP-complete. As for Horn envelopes of exteriors, we show that it is linearly solvable under model-based representation, while it is coNP-complete under formula-based representation. We also discuss polynomially solvable cases for all the intractable problems. Title Verification of Periodically Controlled Hybrid Systems: Application to an Autonomous Vehicle Abstract This article introduces Periodically Controlled Hybrid Automata (PCHA) for modular specification of embedded control systems. In a PCHA, Title From Almost Optimal Algorithms to Logics for Complexity Classes via Listings and a Halting Problem Abstract Let C denote one of the complexity classes “polynomial time,” “logspace,” or “nondeterministic logspace.” We introduce a logic Title Cheap, easy, and massively effective viral marketing in social networks: truth or fiction? Abstract Online social networks (OSNs) have become one of the most effective channels for marketing and advertising. Since users are often influenced by their friends, "word-of-mouth" exchanges so-called viral marketing in social networks can be used to increases product adoption or widely spread content over the network. 
The common perception of viral marketing as being cheap, easy, and massively effective makes it an ideal replacement for traditional advertising. However, recent studies have revealed that the propagation often fades quickly within only a few hops from the sources, counteracting the assumption of self-perpetuating influence considered in the literature. With only limited influence propagation, is massively reaching customers via viral marketing still affordable? How can we economically spend more resources to increase the spreading speed? We investigate the cost-effective massive viral marketing problem, taking into consideration the limited influence propagation. Both analytical analysis based on power-law network theory and numerical analysis demonstrate that the viral marketing might involve costly seeding. To minimize the seeding cost, we provide a mathematical programming formulation to find optimal seeding for medium-size networks and propose VirAds, an efficient algorithm, to tackle the problem on large-scale networks. VirAds guarantees a relative error bound of Title Verification games: making verification fun Abstract Program verification is the only way to be certain that a given piece of software is free of (certain types of) errors --- errors that could otherwise disrupt operations in the field. To date, formal verification has been done by specially-trained engineers. Labor costs have heretofore made formal verification too costly to apply beyond small, critical software components. Our goal is to make verification more cost-effective by reducing the skill set required for program verification and increasing the pool of people capable of performing program verification. Our approach is to transform the verification task (a program and a goal property) into a visual puzzle task --- a game --- that gets solved by people. The solution of the puzzle is then translated back into a proof of correctness. The puzzle is engaging and intuitive enough that ordinary people can, through game-play, become experts. This paper presents a status report on the Verification Games project and our Pipe Jam prototype game. Title Short proofs for the determinant identities Abstract We study arithmetic proof systems P This yields a solution to a basic open problem in propositional proof complexity, namely, whether there are polynomial-size NC Title Simplification Rules for Intuitionistic Propositional Tableaux Abstract The implementation of a logic requires, besides the definition of a calculus and a decision procedure, the development of techniques to reduce the search space. In this article we introduce some simplification rules for Intuitionistic propositional logic that try to replace a formula with an equi-satisfiable “simpler” one with the aim of reducing the search space. Our results are proved via semantical techniques based on Kripke models. We also provide an empirical evaluation of their impact on implementations. Title On construction of a library of formally verified low-level arithmetic functions Abstract Most information security infrastructures rely on cryptography, which is usually implemented with low-level arithmetic functions. The formal verification of these functions therefore becomes a prerequisite to firmly assess any security property. We propose an approach for the construction of a library of formally verified low-level arithmetic functions that can be used to implement realistic cryptographic schemes in a trustful way.
For that purpose, we introduce a formalization of data structures for signed multi-precision arithmetic and we experiment with it through the formal verification of basic functions, using Separation logic. Because this direct style of formal verification leads to technically involved specifications, we also propose, for larger functions, to show a formal simulation relation between pseudo-code and assembly. This is illustrated with the binary extended gcd algorithm. Title A closer look at aspect interference and cooperation Abstract In this work we consider specification and compositional verification for interference detection when several aspects are woven together under joint-weaving semantics without recursion. In this semantics, whenever a joinpoint of an aspect is reached, the corresponding advice is begun even if the joinpoint is inside the advice of other aspects. This captures most of the possible aspect interference cases in AspectJ. Moreover, the given technique is used to capture cooperation among aspects, which enhances modularity. The extended specification and proof obligations should provide insight into the possible interactions among aspects in a reusable library. Title Verification of software barriers Abstract This paper describes frontiers in verification of the software barrier synchronization primitive. So far most software barrier algorithms have not been mechanically verified. We show preliminary results in automatically proving the correctness of the major software barriers. CCS Theory of computation Logic Modal and temporal logics Title Annotated Probabilistic Temporal Logic: Approximate Fixpoint Implementation Abstract Annotated Probabilistic Temporal (APT) logic programs support building applications where we wish to reason about statements of the form “Formula Title Policy auditing over incomplete logs: theory, implementation and applications Abstract We present the design, implementation and evaluation of an algorithm that checks audit logs for compliance with privacy and security policies. The algorithm, which we name reduce, addresses two fundamental challenges in compliance checking that arise in practice. First, in order to be applicable to realistic policies, reduce operates on policies expressed in a first-order logic that allows restricted quantification over infinite domains. We build on ideas from logic programming to identify the restricted form of quantified formulas. The logic can, in particular, express all 84 disclosure-related clauses of the HIPAA Privacy Rule, which involve quantification over the infinite set of messages containing personal information. Second, since audit logs are inherently incomplete (they may not contain sufficient information to determine whether a policy is violated or not), reduce proceeds iteratively: in each iteration, it provably checks as much of the policy as possible over the current log and outputs a residual policy that can only be checked when the log is extended with additional information. We prove correctness, termination, time and space complexity results for reduce. We implement reduce and optimize the base implementation using two heuristics for database indexing that are guided by the syntactic structure of policies. The implementation is used to check simulated audit logs for compliance with the HIPAA Privacy Rule. Our experimental results demonstrate that the algorithm is fast enough to be used in practice.
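The abstract above on a library of formally verified low-level arithmetic functions mentions the binary extended gcd algorithm as its larger worked example. For reference, here is a small, unverified Python sketch of the classical binary extended gcd (the shift-and-subtract variant described, for instance, in the Handbook of Applied Cryptography); it only recalls what the algorithm computes and is not the paper's assembly-level development.

```python
def binary_extended_gcd(x, y):
    """Return (a, b, g) with a*x + b*y == g == gcd(x, y), using only
    halvings, additions and subtractions (binary extended gcd)."""
    assert x > 0 and y > 0
    shift = 0
    while x % 2 == 0 and y % 2 == 0:     # factor out common powers of two
        x //= 2
        y //= 2
        shift += 1
    u, v = x, y
    A, B, C, D = 1, 0, 0, 1              # invariants: A*x + B*y == u, C*x + D*y == v
    while u != 0:
        while u % 2 == 0:
            u //= 2
            if A % 2 == 0 and B % 2 == 0:
                A //= 2; B //= 2
            else:
                A = (A + y) // 2; B = (B - x) // 2
        while v % 2 == 0:
            v //= 2
            if C % 2 == 0 and D % 2 == 0:
                C //= 2; D //= 2
            else:
                C = (C + y) // 2; D = (D - x) // 2
        if u >= v:
            u -= v; A -= C; B -= D
        else:
            v -= u; C -= A; D -= B
    return C, D, v << shift

a, b, g = binary_extended_gcd(240, 46)
assert a * 240 + b * 46 == g == 2
```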
Title A graph-based fuzzy linguistic metadata schema for describing spatial relationships Abstract The spatial relationship description among objects is highly desirable for many research areas such as artificial intelligence and image analysis. In this paper we present a novel fuzzy logic method to automatically generate the description of spatial relationships among objects. A new graph-based fuzzy linguistic metadata schema named Snowflake is proposed to describe the topology and metric relationships for a set of objects. Like an artist painting a picture, Snowflake selects one reference object to present the spatial relationships of all the other objects with respect to this reference object. This paper introduces the operations and isomorphism of Snowflake. The paper also demonstrates that Snowflake preserves the rotation invariance and the scale invariance of spatial relationships. Experiments show that Snowflake is an efficient and effective spatial modeling method. Title Logics for information systems and their dynamic extensions Abstract The article proposes logics for Title Abstraction for epistemic model checking of dining cryptographers-based protocols Abstract The paper describes an abstraction for protocols that are based on multiple rounds of Chaum's Dining Cryptographers protocol. It is proved that the abstraction preserves a rich class of specifications in the logic of knowledge. This result is applied to optimize model checking of implementations of a knowledge-based program that uses the Dining Cryptographers protocol as a primitive in an anonymous broadcast system. Performance results are given for model checking knowledge-based specifications in the concrete and abstract models of this protocol, and some new conclusions about the protocol are derived. Title Sigma algebras in probabilistic epistemic dynamics Abstract This paper extends probabilistic dynamic epistemic logic from a finite setting to an infinite setting, by introducing σ-algebras to the probability spaces in the models. This may extend the applicability of the logic to a real world setting with infinitely many possible measurements. It is shown that the dynamics preserves desirable properties of measurability and that completeness of the proof system holds with the extended semantics. Title Perfect recall of imperfect knowledge Abstract Perfect recall, intuitively the ability to remember all past mental states, has been predominantly studied in the context of interpreted systems and game theory, which mostly consider S5 systems (of "correct" knowledge). More recently, the notion has become of interest to the epistemic logic community, where weaker systems are not unusual. Building upon recent work where we studied different definitions of perfect recall in Epistemic Temporal Logic (ETL), we argue that the intuitive motivations given there are still valid in such sub-S5 settings. However, definitions that were equivalent in S5 cease to be so without S5, so that these less restrictive settings allow for a more fine-grained comparison of the different definitions and their underlying intuitions. Title Exploring a theory of play Abstract We explore some recent directions for the logical foundations of social action that emerge from contacts between logic, game theory, philosophy, and computer science. Title Strategic communication Abstract We model games where players strategically exchange messages in a language for reasoning and strategically update their reasoning. 
The language for the stage game incorporates awareness and knowledge and extends [14]'s propositional quantification to quantification over all sentences in the language. The updating of reasoning is modeled as a strategic choice of the players and the dynamics of the logic provide constraints for this strategic update choice. A communication game is constructed using an underlying incomplete information game, the strategic choice of messages and the strategic and logic dynamics. Multiple games are described varying by how the game theoretic type-space relates to the language for reasoning. Title Equivalence of the information structure with unawareness to the logic of awareness Abstract This paper proves the Li (2009) unawareness structure equivalent to the single-agent propositionally generated logic of awareness of Fagin and Halpern (1988). For any model of one type one can construct a model of the other type describing the same belief and awareness. Li starts from an agent unable to perceive aspects of the world and distinguish states, modelled with subjective state spaces coarser than the objective state space. Fagin and Halpern limit the agent's language or cognitive ability to reasoning only about a subset of the propositions describing the world. Equivalence of these approaches suggests they capture a natural notion of unawareness in a minimal way. NA CCS Theory of computation Logic Automated reasoning Title Introduction to genetic programming tutorial: from the basics to human-competitive results Abstract The tutorial will start with a description of the problem addressed by genetic programming, a description of the basic genetic programming algorithm, and examples of applications. The tutorial will also describe advanced topics, such as use of a developmental process within genetic programming; implementations of automatically defined functions (subroutines), memory, iterations, recursions; parallel processing; the connection between Moore's Law and the results produced by genetic programming; and a brief survey of over 80 examples of human-competitive results using genetic programming. Title Empirical hardness models: Methodology and a case study on combinatorial auctions Abstract Is it possible to predict how long an algorithm will take to solve a previously-unseen instance of an NP-complete problem? If so, what uses can be found for models that make such predictions? This article provides answers to these questions and evaluates the answers experimentally. We propose the use of supervised machine learning to build models that predict an algorithm's runtime given a problem instance. We discuss the construction of these models and describe techniques for interpreting them to gain understanding of the characteristics that cause instances to be hard or easy. We also present two applications of our models: building algorithm portfolios that outperform their constituent algorithms, and generating test distributions that emphasize hard problems. We demonstrate the effectiveness of our techniques in a case study of the combinatorial auction winner determination problem. Our experimental results show that we can build very accurate models of an algorithm's running time, interpret our models, build an algorithm portfolio that strongly outperforms the best single algorithm, and tune a standard benchmark suite to generate much harder problem instances. 
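The empirical hardness models abstract above proposes predicting an algorithm's runtime from instance features and using such predictors to build algorithm portfolios. The sketch below is a minimal, numpy-only illustration of that idea: it fits a linear model on log runtimes per algorithm and, for each new instance, picks the algorithm with the smallest predicted runtime. The feature set, the linear model, and the synthetic data are placeholders, not the authors' methodology.

```python
import numpy as np

def fit_runtime_model(features, runtimes):
    """Least-squares fit of log10(runtime) as a linear function of instance features."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(X, np.log10(runtimes), rcond=None)
    return coef

def predict_runtime(coef, features):
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return 10.0 ** (X @ coef)

def portfolio_choice(models, features):
    """Per instance, pick the algorithm whose model predicts the smallest runtime."""
    preds = np.column_stack([predict_runtime(c, features) for c in models.values()])
    names = list(models)
    return [names[i] for i in preds.argmin(axis=1)]

# Toy usage with synthetic data: two solvers, five instance features.
rng = np.random.default_rng(0)
feats = rng.random((200, 5))
models = {
    "solver_a": fit_runtime_model(feats, 10 ** (feats @ np.array([2, 0, 0, 1, 0]) + 0.1)),
    "solver_b": fit_runtime_model(feats, 10 ** (feats @ np.array([0, 1, 2, 0, 0]) + 0.2)),
}
print(portfolio_choice(models, rng.random((3, 5))))
```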
Title Distribution replacement: how survival of the worst can out perform survival of the fittest Abstract A new family of "Distribution Replacement" operators for use in steady state genetic algorithms is presented. Distribution replacement enforces the members of the population to conform to an arbitrary statistical distribution, defined by its Cumulative Distribution Frequency, relative to the current best individual. As new superior individuals are discovered, the distribution "stretches" to accommodate the increased diversity, the exact opposite of convergence. Decoupling the maintenance of an optimal set of parents from the production of superior children allows the search to be freed from the traditional overhead of evolving a population of maximal fitness and, more significantly, avoids premature convergence. The population distribution has a significant effect on performance for a given problem, and in turn, the type of problem affects the performance of different distributions. Keeping mainly good individuals naturally does well on simple problems (as do distributions that exclude "median" individuals). With deceptive problems however, distributions which keep mainly bad individuals are shown to be superior to other replacement operators and also outperform classical generational genetic algorithms. In all cases, the uniform distribution proves suboptimal. This paper explains the details of distribution replacement, simulation experiments and discussions on the extension of this idea to a dynamic distribution. Title Improving the human readability of features constructed by genetic programming Abstract The use of machine learning techniques to automatically analyse data for information is becoming increasingly widespread. In this paper we examine the use of Genetic Programming and a Genetic Algorithm to pre-process data before it is classified by an external classifier. Genetic Programming is combined with a Genetic Algorithm to construct and select new features from those available in the data, a potentially significant process for data mining since it gives consideration to hidden relationships between features. We then examine techniques to improve the human readability of these new features and extract more information about the domain. Title Towards a self-stopping evolutionary algorithm using coupling from the past Abstract Title Abstract specialization and its applications Abstract Title What's the code?: automatic classification of source code archives Abstract Title Automatic time-bound analysis for a higher-order language Abstract Title Genetic algorithms Abstract Title Generating interesting scenarios from system descriptions Abstract CCS Theory of computation Logic Constraint and logic programming Title Complexity of conservative constraint satisfaction problems Abstract In a constraint satisfaction problem (CSP), the aim is to find an assignment of values to a given set of variables, subject to specified constraints. The CSP is known to be NP-complete in general. However, certain restrictions on the form of the allowed constraints can lead to problems solvable in polynomial time. Such restrictions are usually imposed by specifying a constraint language, that is, a set of relations that are allowed to be used as constraints. A principal research direction aims to distinguish those constraint languages that give rise to tractable CSPs from those that do not. 
We achieve this goal for the important version of the CSP, in which the set of values for each individual variable can be restricted arbitrarily. Restrictions of this type can be studied by considering those constraint languages which contain all possible unary constraints; we call such languages Title Finding partitions of arguments with Dung's properties via SCSPs Abstract Forming coalition structures allows agents to join their forces with the aim to achieve a common task. We suggest it would be interesting to look for homogeneous groups which follow distinct Title Combination: automated generation of puzzles with constraints Abstract Constraint Programming offers a powerful means of solving a wide variety of combinatorial problems. We have used this powerful paradigm to create a successful computer game called Combination. Combination is an application for the iPhone and iPod touch. It has been on sale internationally through the iTunes store since December, 2008 and received a number of positive reviews. In this paper we explain how all the levels of Combination were generated, checked for correctness and rated for difficulty completely automatically through the use of constraints. We go on to evaluate this method of creation with the use of a human evaluation. This showed that fun, immersing computer games can be created with constraint programming. Title Quotients revisited for Isabelle/HOL Abstract Higher-Order Logic (HOL) is based on a small logic kernel, whose only mechanism for extension is the introduction of safe definitions and of non-empty types. Both extensions are often performed in quotient constructions. To ease the work involved with such quotient constructions, we re-implemented in the Isabelle/HOL theorem prover the quotient package by Homeier. In doing so we extended his work in order to deal with compositions of quotients and also specified completely the procedure of lifting theorems from the raw level to the quotient level. The importance for theorem proving is that many formal verifications, in order to be feasible, require a convenient reasoning infrastructure for quotient constructions. Title MWeb: A principled framework for modular web rule bases and its semantics Abstract We present a principled framework for modular Web rule bases, called MWeb. According to this framework, each predicate defined in a rule base is characterized by its defining reasoning mode, scope, and exporting rule base list. Each predicate used in a rule base is characterized by its requesting reasoning mode and importing rule base list. For legal MWeb modular rule bases Title Recent developments in mega's proof search programming language Abstract Title A declarative approach to robust weighted Max-SAT Abstract The presence of uncertainty in the real world makes robustness to be a desired property of solutions to constraint satisfaction problems. Roughly speaking, a solution is robust if it can be easily repaired when unexpected events happen. This issue has already been addressed in the frameworks of Boolean satisfiability (SAT) and Constraint Programming (CP). Most works on robustness implement search algorithms to look for such solutions instead of taking the declarative approach of reformulation, since reformulation tends to generate prohibitively large formulas, especially in the CP setting. On the other hand, recent works suggest the use of SAT and Max-SAT encodings for solving CP instances. 
In this paper we present how robust solutions to weighted Max-SAT problems can be effectively obtained via reformulation into pseudo-Boolean formulae, thus providing a much more flexible approach to robustness. We illustrate the use of our approach in the robust combinatorial auctions setting and provide some promising experimental results. Title Tabling for transaction logic Abstract Transaction Logic is a logic for representing declarative and procedural knowledge in logic programming, databases, and AI. It has been successful in areas as diverse as workflows and Web services, security policies, AI planning, reasoning about actions, and more. Although a number of implementations of Transaction Logic exist, none is logically complete due to the inherent difficulty and time/space complexity of such implementations. In this paper we attack this problem by first introducing a logically complete tabling evaluation strategy for Transaction Logic and then describing a series of optimizations, which make this algorithm practical. In support of our arguments, we present a performance evaluation study of six different implementations of this algorithm, each successively adopting our optimizations. The study suggests that the tabling algorithm can scale well both in time and space. We also discuss ideas that could improve the performance further. Title Scalable formula decomposition for propositional satisfiability Abstract Propositional satisfiability solving, or SAT, is an important reasoning task arising in numerous applications, such as circuit design, formal verification, planning, scheduling or probabilistic reasoning. The depth-first search DPLL procedure is in practice the most efficient complete algorithm to date. Previous studies have shown the theoretical and experimental advantages of decomposing propositional formulas to guide the ordering of variable instantiation in DPLL. However, in practice, the computation of a tree decomposition may require a considerable amount of time and space on large formulas; existing decomposition tools are unable to handle most currently challenging SAT instances because of their size. In this paper, we introduce a simple, fast and scalable method to quickly produce tree decompositions of large SAT problems. We show experimentally the efficiency of orderings derived from these decompositions on the solving of challenging benchmarks. Title The complexity of rooted phylogeny problems Abstract Several computational problems in phylogenetic reconstruction can be formulated as restrictions of the following general problem: given a formula in conjunctive normal form where the atomic formulas are CCS Theory of computation Logic Constructive mathematics CCS Theory of computation Logic Description logics CCS Theory of computation Logic Equational logic and rewriting CCS Theory of computation Logic Finite Model Theory Title Dynamic definability Abstract We investigate the logical resources required to maintain knowledge about a property of a finite structure that undergoes an ongoing series of local changes such as insertion or deletion of tuples to basic relations. Our framework is closely related to the Title The finite model theory toolbox of a database theoretician Abstract For many years, finite model theory was viewed as the backbone of database theory, and database theory in turn supplied finite model theory with key motivations and problems.
By now, finite model theory has built a large arsenal of tools that can easily be used by database theoreticians without going to the basics such as combinatorial games. We survey such tools here, focusing not on how they are proved, but rather on how to apply them, as-is, in various questions that come up in database theory. NA Title Classical BI: a logic for reasoning about dualising resources Abstract We show how to extend O'Hearn and Pym's logic of bunched implications, BI, to classical BI (CBI), in which both the additive and the multiplicative connectives behave classically. Specifically, CBI is a non-conservative extension of (propositional) Boolean BI that includes multiplicative versions of falsity, negation and disjunction. We give an algebraic semantics for CBI that leads us naturally to consider resource models of CBI in which every resource has a unique dual. We then give a cut-eliminating proof system for CBI, based on Belnap's display logic, and demonstrate soundness and completeness of this proof system with respect to our semantics. Title Homomorphism preservation theorems Abstract The homomorphism preservation theorem (h.p.t.), a result in classical model theory, states that a first-order formula is preserved under homomorphisms on all structures (finite and infinite) if and only if it is equivalent to an existential-positive formula. Answering a long-standing question in finite model theory, we prove that the h.p.t. remains valid when restricted to finite structures (unlike many other classical preservation theorems, including the Łoś--Tarski theorem and Lyndon's positivity theorem). Applications of this result extend to constraint satisfaction problems and to database theory via a correspondence between existential-positive formulas and unions of conjunctive queries. A further result of this article strengthens the classical h.p.t.: we show that a first-order formula is preserved under homomorphisms on all structures if and only if it is equivalent to an existential-positive formula Title On preservation under homomorphisms and unions of conjunctive queries Abstract Title Convergence law for random graphs with specified degree sequence Abstract Title Model-theoretic semantics for the web Abstract Title Existential second-order logic over strings Abstract Title Relational expressive power of constraint query languages Abstract Title Model checking for programming languages using VeriSoft Abstract NA 243 Citations CCS Theory of computation Logic Higher order logic CCS Theory of computation Logic Linear logic CCS Theory of computation Logic Programming logic Title Automated error diagnosis using abductive inference Abstract When program verification tools fail to verify a program, either the program is buggy or the report is a false alarm. In this situation, the burden is on the user to manually classify the report, but this task is time-consuming, error-prone, and does not utilize facts already proven by the analysis. We present a new technique for assisting users in classifying error reports. Our technique computes small, relevant queries presented to a user that capture exactly the information the analysis is missing to either discharge or validate the error. Our insight is that identifying these missing facts is an instance of the Title Algebraic foundations for effect-dependent optimisations Abstract We present a general theory of Gifford-style type and effect annotations, where effect annotations are sets of effects. 
Generality is achieved by recourse to the theory of algebraic effects, a development of Moggi's monadic theory of computational effects that emphasises the operations causing the effects at hand and their equational theory. The key observation is that annotation effects can be identified with operation symbols. We develop an annotated version of Levy's Call-by-Push-Value language with a kind of computations for every effect set; it can be thought of as a sequential, annotated intermediate language. We develop a range of validated optimisations (i.e., equivalences), generalising many existing ones and adding new ones. We classify these optimisations as structural, algebraic, or abstract: structural optimisations always hold; algebraic ones depend on the effect theory at hand; and abstract ones depend on the global nature of that theory (we give modularly-checkable sufficient conditions for their validity). Title Syntactic control of interference for separation logic Abstract Separation Logic has witnessed tremendous success in recent years in reasoning about programs that deal with heap storage. Its success owes to the fundamental principle that one should keep separate areas of the heap storage separate in program reasoning. However, the way Separation Logic deals with program variables continues to be based on traditional Hoare Logic without taking any benefit of the separation principle. This has led to unwieldy proof rules suffering from lack of clarity as well as questions surrounding their soundness. In this paper, we extend the separation idea to the treatment of variables in Separation Logic, especially Concurrent Separation Logic, using the system of Syntactic Control of Interference proposed by Reynolds in 1978. We extend the original system with permission algebras, making it more powerful and able to deal with the issues of concurrent programs. The result is a streamlined presentation of Concurrent Separation Logic, whose rules are memorable and whose soundness is obvious. We also include a discussion of how the new rules impact the semantics and devise static analysis techniques to infer the required permissions automatically. Title Nested refinements: a logic for duck typing Abstract Programs written in dynamic languages make heavy use of features --- run-time type tests, value-indexed dictionaries, polymorphism, and higher-order functions --- that are beyond the reach of type systems that employ either purely syntactic or purely semantic reasoning. We present a core calculus, System D, that merges these two modes of reasoning into a single powerful mechanism of nested refinement types wherein the typing relation is itself a predicate in the refinement logic. System D coordinates SMT-based logical implication and syntactic subtyping to automatically typecheck sophisticated dynamic language programs. By coupling nested refinements with McCarthy's theory of finite maps, System D can precisely reason about the interaction of higher-order functions, polymorphism, and dictionaries. The addition of type predicates to the refinement logic creates a circularity that leads to unique technical challenges in the metatheory, which we solve with a novel stratification approach that we use to prove the soundness of System D. Title Specification and verification of meta-programs Abstract This talk gives an overview of meta-programming, with an emphasis on recent developments in extending existing specification and verification technology to meta-programs.
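The nested refinements (System D) abstract above relies on McCarthy's theory of finite maps to reason about dictionaries. As a reminder of what that theory provides, here is a tiny Python sketch of the two read-over-write axioms (select after store); the function names are the conventional ones from the theory, and the snippet is purely illustrative rather than anything from the paper.

```python
def store(m, k, v):
    """Functional update: a copy of map m with key k bound to v."""
    m2 = dict(m)
    m2[k] = v
    return m2

def select(m, k):
    """Read key k from map m."""
    return m[k]

# McCarthy's read-over-write axioms, checked on a concrete map:
m = {"x": 1}
assert select(store(m, "y", 2), "y") == 2               # select(store(m,k,v), k)  == v
assert select(store(m, "y", 2), "x") == select(m, "x")  # select(store(m,k,v), k') == select(m,k') for k' != k
```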
Title An innovative teaching tool based on semantic tableaux for verification and debugging of programs Abstract In this paper, we propose a new methodology based on a logic teaching tool on semantic tableaux that has been developed to help students to use logic as a formal proof technique in advanced topics of Computer Science, such as the formal verification of algorithms and the algorithmic debugging of imperative programs. Title SCRATCH: a tool for automatic analysis of dma races Abstract We present the SCRATCH tool, which uses bounded model checking and k-induction to automatically analyse software for multicore processors such as the Cell BE, in order to detect DMA races. Title Automatic safety proofs for asynchronous memory operations Abstract We present a work-in-progress proof system and tool, based on separation logic, for analysing memory safety of multicore programs that use asynchronous memory operations. Title The essence of monotonic state Abstract We extend a static type-and-capability system with new mechanisms for expressing the promise that a certain abstract value evolves monotonically with time; for enforcing this promise; and for taking advantage of this promise to establish non-trivial properties of programs. These mechanisms are independent of the treatment of mutable state, but combine with it to offer a flexible account of "monotonic state". We apply these mechanisms to solve two reasoning challenges that involve mutable state. First, we show how an implementation of thunks in terms of references can be assigned types that reflect time complexity properties, in the style of Danielsson (2008). Second, we show how an implementation of hash-consing can be assigned a specification that conceals the existence of an internal state yet guarantees that two pieces of input data receive the same hash code if and only if they are equal. Title Constructing datatype-generic fully polynomial-time approximation schemes using generalised thinning Abstract The CCS Theory of computation Logic Abstraction Title Ontology learning from text: A look back and into the future Abstract Ontologies are often viewed as the answer to the need for interoperable semantics in modern information systems. The explosion of textual information on the Read/Write Web coupled with the increasing demand for ontologies to power the Semantic Web have made (semi-)automatic ontology learning from text a very promising research area. This together with the advanced state in related areas, such as natural language processing, have fueled research into ontology learning over the past decade. This survey looks at how far we have come since the turn of the millennium and discusses the remaining challenges that will define the research directions in this area in the near future. Title Receipts2Go: the big world of small documents Abstract The Receipts2Go system is about the world of one-page documents: cash register receipts, book covers, cereal boxes, price tags, train tickets, fire extinguisher tags. In that world, we're exploring techniques for extracting accurate information from documents for which we have no layout descriptions -- indeed no initial idea of what the document's genre is -- using photos taken with cell phone cameras by users who aren't skilled document capture technicians. 
This paper outlines the system and reports on some initial results, including the algorithms we've found useful for cleaning up those document images, and the techniques used to extract and organize relevant information from thousands of similar-but-different page layouts. Title Making results fit into 40 characters: a study in document rewriting Abstract With the increasing popularity of mobile and hand-held devices, automatic approaches for adapting results to the limited screen size of mobile devices are becoming more important. Traditional approaches for reducing the length of textual results include summarization and snippet extraction. In this study, we investigate document rewriting techniques which retain the meaning and readability of the original text. Evaluations on different document sets show that i) rewriting documents considerably reduces document length and thus, scrolling effort on devices with limited screen size, and ii) the rewritten documents have a higher readability. Title A unified graph model for Chinese product review summarization using richer information Abstract With e-commerce growing rapidly, online product reviews open amounts of studies of extracting useful information from numerous reviews. How to generate informative and concise summaries from reviews automatically has become a critical issue. In this paper, we present a novel unified graph model, composited information graph (CIG), to represent reviews with lexical, topic and together with sentiment information. Based on the model, we propose an automatic approach to address this issue. We use probabilistic methods to model the lexical, topic and sentiment information separately, associate with the discovered information in the CIG model, and generate summaries with a HITS-like algorithm called Mix-HITS considering both the Title Automatic evaluation of video summaries Abstract This article describes a method for the automatic evaluation of video summaries based on the training of individual predictors for different quality measures from the TRECVid 2008 BBC Rushes Summarization Task. The obtained results demonstrate that, with a large set of evaluation data, it is possible to train fully automatic evaluation systems based on visual features automatically extracted from the summaries. The proposed approach will enable faster and easier estimation of the results of newly developed abstraction algorithms and the study of which summary characteristics influence their perceived quality. Title Optimized trace transform based content based image retrieval algorithm Abstract The last decade has seen a rapid increase in the use of visual information leading to storage and accessibility problems. To improve human access, there must be effective and precise retrieval algorithm for the user to search, browse and interact with these collections in real time. In this paper we propose a trace transform based content based image retrieval algorithm (TTB-CBIR). The proposed algorithm applies trace transform which is robust to affine transform for feature extraction. Similarity measure is done using hamming distance. The TTB-CBIR algorithm is tested on corel database of images. The proposed algorithm shows optimum performance in terms of memory utilization and retrieval time. Title An approach to summarizing Bengali news documents Abstract This paper describes a system that produces extractive summaries of Bengali news documents. 
The ultimate objective of produced summaries is defined as helping readers to determine whether they would be interested in reading a particular document. To this end, the summary aims to provide a reader with an idea about the theme of a document without revealing the in-depth detail. The approach presented here has four major steps: (1) preprocessing, (2) extraction of candidate summary sentences, (3) ranking the candidate summary sentences, and (4) summary generation. The proposed approach defines the TF*IDF, position and sentence length features in a more effective way, which helps to improve the summarization performance. The experimental results show that the proposed text summarization approach outperforms the lead baseline and a more sophisticated baseline that uses both TF*IDF and position features. Title A Generic Approach for Systematic Analysis of Sports Videos Abstract Various innovative and original works have been applied and proposed in the field of sports video analysis. However, individual works have focused on sophisticated methodologies with particular sport types and there has been a lack of scalable and holistic frameworks in this field. This article proposes a solution and presents a systematic and generic approach which is experimented on a relatively large-scale sports consortium. The system aims at the event detection scenario of an input video with an orderly sequential process. Initially, domain knowledge-independent local descriptors are extracted homogeneously from the input video sequence. Then the video representation is created by adopting a bag-of-visual-words (BoW) model. The video’s genre is first identified by applying the k-nearest neighbor (k-NN) classifiers on the initially obtained video representation, and various dissimilarity measures are assessed and evaluated analytically. Subsequently, an unsupervised probabilistic latent semantic analysis (PLSA)-based approach is employed on the same histogram-based video representation, characterizing each frame of the video sequence into one of four view groups, namely close-up-view, mid-view, long-view, and outer-field-view. Finally, a hidden conditional random field (HCRF) structured prediction model is utilized for interesting event detection. From experimental results, the k-NN classifier using KL-divergence measurement demonstrates the best accuracy at 82.16% for genre categorization. Supervised SVM and unsupervised PLSA have average classification accuracies at 82.86% and 68.13%, respectively. The HCRF model achieves 92.31% accuracy using the unsupervised PLSA based label input, which is comparable with the supervised SVM based input at an accuracy of 93.08%. In general, such a systematic approach can be widely applied in processing massive videos generically. Title Computing bounded reach sets from sampled simulation traces Abstract This paper presents an algorithm which uses simulation traces and formal models for computing overapproximations of reach sets of deterministic hybrid systems. The implementation of the algorithm in a tool, Title SEEP: exploiting symbolic execution for energy-aware programming Abstract In recent years, there has been a rapid evolution of energy-aware computing systems (e.g., mobile devices, wireless sensor nodes), as still-rising system complexity and increasing user demands make energy a permanently scarce resource.
While static and dynamic optimizations for energy-aware execution have been explored massively, writing energy-efficient programs in the first place has only received limited attention. This paper proposes SEEP, a framework which exploits symbolic execution and platform-specific energy profiles to provide the basis for CCS Theory of computation Logic Verification by model checking Title Using an interdisciplinary approach to develop a knowledge-driven careflow management system for collaborative patient-centred palliative care Abstract In this paper, we give a work-in-progress report of an interdisciplinary partnership among academic researchers, a regional health authority and an industry partner to develop a web-based platform to support a collaborative approach to hospice palliative care. The needs of such a collaborative, community-based and patient- and family-centred program are outlined with emphasis on the guiding principles, as espoused by the Canadian Hospice Palliative Care Association. The research initiatives to develop a leading-edge knowledge-driven web-based communication, documentation and process management platform, a so-called careflow management system, to support such a program are detailed. Considerations required by an industry partner to ensure such a platform is commercializable are discussed. The long-term goal is to inject science into software by providing tools which are sensitive to local conditions, are flexible enough to adapt to changes on the fly, are easily refactored to adapt to diverse settings and can support evolving electronic health record systems. Title Experience modeling and analyzing medical processes: UMass/baystate medical safety project overview Abstract This paper provides an overview of the UMass/Baystate Medical Safety project, which has been developing and evaluating tools and technology for modeling and analyzing medical processes. We describe the tools that currently comprise the Process Improvement Environment, PIE. For each tool, we illustrate the kinds of information that it provides and discuss how that information can be used to improve the modeled process as well as provide useful information that other tools in the environment can leverage. Because the process modeling notation that we use has rigorously defined semantics and supports creating relatively detailed process models (for example, our models can specify alternative ways of dealing with exceptional behavior and concurrency), a number of powerful analysis techniques can be applied. The cost of eliciting and maintaining such a detailed model is amortized over the range of analyses that can be applied to detect errors, vulnerabilities, and inefficiencies in an existing process or in proposed process modifications before they are deployed. Title Behavioural modelling and verification of real-time software product lines Abstract In Software Product Line (SPL) engineering, software products are built in families rather than individually. Many critical software systems are nowadays built as SPLs, and most of them obey hard real-time requirements. Formal methods for verifying SPLs are thus crucial and actively studied. The verification problem for SPLs is, however, more complicated than for individual systems; the large number of different software products multiplies the complexity of SPL model-checking. Recently, promising model-checking approaches have been developed specifically for SPLs. They leverage the commonality between the products to reduce the verification effort.
However, none of them considers real time. In this paper, we combine existing SPL verification methods with established model-checking procedures for real-time systems. We introduce Featured Timed Automata (FTA), a formalism that extends the classical Timed Automata with constructs for modelling variability. We show that FTA model-checking can be achieved through a smart combination of real-time and SPL model checking. Title Towards an incremental automata-based approach for software product-line model checking Abstract Most model-checking algorithms are based on automata theory. For instance, determining whether or not a transition system satisfies a Linear Temporal Logic (LTL) formula requires computing the strongly connected components of its transition graph. In Software Product-Line (SPL) engineering, the model checking problem is more complex due to the huge number of software products that may compose the line. Indeed, one has to determine the exact subset of those products that do not satisfy an intended property. Efficient dedicated verification methods have recently been developed to address this problem. However, most of them do not allow incremental verification. In this paper, we introduce an automata-based incremental approach for SPL model checking. Our method makes use of previous results to determine whether or not the addition of conservative features ( Title Verification of Safety and Liveness Properties of Metric Transition Systems Abstract We consider verification problems for transition systems enriched with a metric structure. We believe that these metric transition systems are particularly suitable for the analysis of cyber-physical systems in which metrics can be naturally defined on the numerical variables of the embedded software and on the continuous states of the physical environment. We consider verification of bounded and unbounded safety properties, as well as bounded liveness properties. The transition systems we consider are nondeterministic, finitely branching, and with a finite set of initial states. Therefore, bounded safety/liveness properties can always be verified by exhaustive exploration of the system trajectories. However, this approach may be intractable in practice, as the number of trajectories usually grows exponentially with respect to the considered bound. Furthermore, since the systems we consider can have an infinite set of states, exhaustive exploration cannot be used for unbounded safety verification. For bounded safety properties, we propose an algorithm which combines exploration of the system trajectories and state space reduction using merging based on a bisimulation metric. The main novelty compared to an algorithm presented recently by Lerda et al. [2008] consists in introducing a tuning parameter that improves the performance drastically. We also establish a procedure that allows us to prove unbounded safety from the result of the bounded safety algorithm via a refinement step. We then adapt the algorithm to handle bounded liveness verification. Finally, the effectiveness of the approach is demonstrated by applying it to the analysis of implementations of an embedded control loop. Title Symbolic consistency checking of OpenMP parallel programs Abstract We present a symbolic approach for checking consistency of OpenMP parallel programs. A parallel program is consistent if it yields the same result as its sequential version regardless of the execution order among threads.
We find race conditions of an OpenMP parallel program, construct the formal model of its raced segments under relaxed memory models, and perform guided symbolic simulation to search consistency violations. The simulation terminates when (1) a witness has been found (the program is inconsistent), or (2) all reachable states have been explored (the program is consistent). We have developed the tool Pathg by incorporating Omega library to solve race constraints and Red symbolic simulator to perform guided search. We show that Pathg can prove consistency of programs, identify races that modern OpenMP checkers failed to report, and find inconsistency witnesses effectively against benchmarks from the OpenMP Source Code Repository and the NAS Parallel benchmark suite. Title Symbolic model checking on SystemC designs Abstract SystemC is a de-facto standard for modeling system-level designs in the early design stage. Verifying SystemC designs is critical in the design process since it can avoid error propagation down to the final implementation. Recent works exploit the software model checking techniques to tackle this important issue. But they abstract away relevant semantic aspects or show limited scalability. In this paper, we devise a symbolic model checking technique using bounded model checking and induction to formally verify SystemC designs. We introduce the notions of behavioral states and transitions to guarantee the soundness of our approach. The experiments show the scalability and the efficiency of our method. Title User-friendly approach for handling performance parameters during predictive software performance engineering Abstract A Software Product Line (SPL) is a set of similar software systems that share a common set of features. Instead of building each product from scratch, SPL development takes advantage of the reusability of the core assets shared among the SPL members. In this work, we integrate performance analysis in the early phases of SPL development process, applying the same reusability concept to the performance annotations. Instead of annotating from scratch the UML model of every derived product, we propose to annotate the SPL model once with generic performance annotations. After deriving the model of a product from the family model by an automatic transformation, the generic performance annotations need to be bound to concrete product-specific values provided by the developer. Dealing manually with a large number of performance annotations, by asking the developer to inspect every diagram in the generated model and to extract these annotations is an error-prone process. In this paper we propose to automate the collection of all generic parameters from the product model and to present them to the developer in a user-friendly format (e.g., a spreadsheet per diagram, indicating each generic parameter together with guiding information that helps the user in providing concrete binding values). There are two kinds of generic parametric annotations handled by our approach: product-specific (corresponding to the set of features selected for the product) and platform-specific (such as device choices, network connections, middleware, and runtime environment). The following model transformations for (a) generating a product model with generic annotations from the SPL model, (b) building the spreadsheet with generic parameters and guiding information, and (c) performing the actual binding are all realized in the Atlas Transformation Language (ATL). 
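The bounded model checking with induction technique mentioned in the SystemC entry above can be illustrated on a toy transition system. The following is a minimal k-induction sketch, assuming the z3-solver Python package is available; the counter system, the evenness property, and every identifier are illustrative assumptions rather than details from the paper.

```python
# A minimal sketch of bounded model checking combined with k-induction.
# The transition system (a counter that steps by 2) and the safety property
# (the counter stays even) are illustrative assumptions only.
from z3 import Int, Solver, And, Or, Not, unsat

def init(s):           # initial-state predicate I(s)
    return s == 0

def trans(s, s_next):  # transition relation T(s, s')
    return s_next == s + 2

def prop(s):           # safety property P(s): the counter is always even
    return s % 2 == 0

def k_induction(k):
    assert k >= 1
    xs = [Int(f"x{i}") for i in range(k + 1)]
    path = And([trans(xs[i], xs[i + 1]) for i in range(k)])

    # Base case: no violation is reachable within k steps from an initial state.
    base = Solver()
    base.add(init(xs[0]), path, Or([Not(prop(x)) for x in xs]))
    if base.check() != unsat:
        return False  # a counterexample of length <= k exists

    # Inductive step: k consecutive safe states cannot be followed by an unsafe one.
    step = Solver()
    step.add(And([prop(x) for x in xs[:-1]]), path, Not(prop(xs[-1])))
    return step.check() == unsat

print(k_induction(1))  # True: the property is 1-inductive for this toy system
```

The inductive step is what lifts a bounded check to an unbounded guarantee: if it succeeds, every reachable state satisfies the property, not only those within the explored bound.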
Title A Büchi automata based model checking framework for reo connectors Abstract Reo is an exogenous coordination language for synchronizing components participating in a component-based system. In this paper we provide a verification framework for model checking of Reo connectors. The proposed framework applies an extension of Büchi automata as the operational semantic model for Reo connectors and a record-based extension of linear time temporal logic (LTL) for expressing properties. Several aspects of Reo connectors, specially synchronization, context dependencies and fairness constraints, are addressed by this model checker due to its supported underlying model. The main ideas behind this implementation are to introduce a symbolic representation for the main elements of our model checking framework, adapt some existing theories to our verification context and develop a new BDD-based model checker with efficient performance. Moreover, all above mentioned features of Reo connectors are addressed by this toolkit. This implementation is evaluated by means of some case studies and the results are reported. Title A model checker for Bigraphs Abstract We present a model checking tool for Bigraphical Reactive Systems that may be instantiated as a model checker for any formalism or domain-specific modelling language encoded as a Bigraphical Reactive System. We describe the implementation of the tool, and how it can be used to verify correctness properties of some infinite-state models by applying a static analysis to reaction rules that permits the exclusion of some infinite branches of execution shown to always be free of violations. We give a proof of correctness for this method, and illustrate the usage of the tool with two examples --- a textbook implementation of the Dining Philosophers problem, and an example motivated by a ubiquitous computing application. CCS Theory of computation Logic Type theory Title Permissive-nominal logic: First-order logic over nominal terms and sets Abstract Permissive-Nominal Logic (PNL) is an extension of first-order predicate logic in which term-formers can bind names in their arguments. This allows for direct axiomatizations with binders, such as of the λ-binder of the lambda-calculus or the ∀-binder of first-order logic. It also allows us to finitely axiomatize arithmetic, and similarly to axiomatize “nominal” datatypes-with-binding. Just like first- and higher-order logic, equality reasoning is not necessary to α-rename. This gives PNL much of the expressive power of higher-order logic, but models and derivations of PNL are first-order in character, and the logic seems to strike a good balance between expressivity and simplicity. Title A simple NP-hard problem Abstract Title Decidability results for sets with atoms Abstract Title File under "Unknowable!" 
Abstract Title The importance of being biased Abstract Title String realizers of posets with applications to distributed computing Abstract NA Title A method for deciding whether the Galois group is abelian Abstract Title Constructing endomorphism rings via duals Abstract Title How to check if a finitely generated commutative monoid is a principal ideal commutative monoid Abstract Title Discovery through rough set theory Abstract CCS Theory of computation Logic Hoare logic CCS Theory of computation Logic Separation logic CCS Theory of computation Design and analysis of algorithms Graph algorithms analysis CCS Theory of computation Design and analysis of algorithms Approximation algorithms analysis CCS Theory of computation Design and analysis of algorithms Mathematical optimization CCS Theory of computation Design and analysis of algorithms Data structures design and analysis CCS Theory of computation Design and analysis of algorithms Online algorithms CCS Theory of computation Design and analysis of algorithms Parameterized complexity and exact algorithms CCS Theory of computation Design and analysis of algorithms Streaming, sublinear and near linear time algorithms CCS Theory of computation Design and analysis of algorithms Parallel algorithms CCS Theory of computation Design and analysis of algorithms Distributed algorithms CCS Theory of computation Design and analysis of algorithms Algorithm design techniques CCS Theory of computation Design and analysis of algorithms Concurrent algorithms CCS Theory of computation Randomness, geometry and discrete structures Pseudorandomness and derandomization CCS Theory of computation Randomness, geometry and discrete structures Computational geometry Title ACM workshop on 3d object retrieval: 3DOR'10 chair's welcome Abstract 3D media has emerged rapidly as a new type of content within the multimedia domain. The recent acceleration of 3D content production, witnessed across all fields up to user-generated content, is causing a huge amount of traffic and data stored and transmitted using Internet technologies. Recent advances in 3D acquisition and 3D graphics rendering technologies boosted the creation of 3D model archives for several application domains. These include archaeology and cultural heritage, computer-assisted design (CAD), medicine and bioinformatics, 3D face recognition and security, entertainment and serious gaming, spatial data and 3D city management. Search engines will soon become a key interaction tool for engaging with this data deluge, and 3D content-based retrieval methods will be crucial in the development of effective 3D search engines: visual media are meant to be seen and should be searched accordingly. 3D content-based retrieval is attracting researchers from different fields: computer vision, computer graphics, machine learning, human-computer interaction, and the semantic web. Since 2008, a series of workshops specifically devoted to the topic was initiated under the auspices of the Eurographics association. The first EG 3D Object Retrieval (3DOR) workshop took place in Crete, April 2008, followed by 3DOR'09 in Munich, March 2009, and 3DOR'10 in Nörkopping, May 2010. The response of the community in all these years was encouraging in terms of number of submission and attendance rate. Due to the co-location of the 3DOR workshop with the Eurographics conference, the events primarily addressed the computer graphics community. 
Now, the co-location with ACM Multimedia 2010, the worldwide premier multimedia conference, gave us the opportunity to meet the multimedia community and further promote a cross-fertilization ground that hopefully will stimulate further discussions on the next steps in this important research area. The response to the call for participation was a success: even if scheduled shortly after the EG 3DOR'10 workshop, the ACM 3DOR'10 received 24 full paper submissions on various topics related to 3D retrieval, ranging from new indexing methods for generic 3D models to context-specific methods, such as face recognition and molecular data analysis. Out of the 24 submissions received, 7 contributions were accepted as oral papers (acceptance rate 30%), and 7 as poster papers. The ACM 3DOR'10 workshop will feature a one-day technical programme, with the presentation of the full papers and poster session. The invited talk given by Prof. Anuj Srivastava on Elastic Riemannian Frameworks and Statistical Tools for Shape Analysis complements the programme. The 3D Object Retrieval workshops gathered and continues to gather great interest in the research community and there are several people we would like to thank for keeping alive this interest: first of all, we would like to acknowledge and thank Ioannis Patrikakis (Democritus University of Thrace, Greece) and Theoharis Theoharis (University of Athens, Greece) for having started the 3DOR workshop series; Alberto Del Bimbo, for the encouragement to bring 3DOR closer to ACM Multimedia 2010; the ACM - 3DOR'10 PC members and reviewers for their efforts and commitment; all the authors of the submitted papers that are demonstrating the importance of the topic. We would like to thank the Institut TELECOM for the financial support. We look forward to the next event on 3D Object Retrieval. Title Entropy, triangulation, and point location in planar subdivisions Abstract A data structure is presented for point location in connected planar subdivisions when the distribution of queries is known in advance. The data structure has an expected query time that is within a constant factor of optimal. More specifically, an algorithm is presented that preprocesses a connected planar subdivision Title Plastic trees: interactive self-adapting botanical tree models Abstract We present a dynamic tree modeling and representation technique that allows complex tree models to interact with their environment. Our method uses changes in the light distribution and proximity to solid obstacles and other trees as approximations of biologically motivated transformations on a skeletal representation of the tree's main branches and its procedurally generated foliage. Parts of the tree are transformed only when required, thus our approach is much faster than common algorithms such as Open L-Systems or space colonization methods. Input is a skeleton-based tree geometry that can be computed from common tree production systems or from reconstructed laser scanning models. Our approach enables content creators to directly interact with trees and to create visually convincing ecosystems interactively. We present different interaction types and evaluate our method by comparing our transformations to biologically based growth simulation techniques. Title An algebraic model for parameterized shape editing Abstract We present an approach to high-level shape editing that adapts the structure of the shape while maintaining its global characteristics. 
Our main contribution is a new algebraic model of shape structure that characterizes shapes in terms of linked translational patterns. The space of shapes that conform to this characterization is parameterized by a small set of numerical parameters bounded by a set of linear constraints. This convex space permits a direct exploration of variations of the input shape. We use this representation to develop a robust interactive system that allows shapes to be intuitively manipulated through sparse constraints. Title Dual loops meshing: quality quad layouts on manifolds Abstract We present a theoretical framework and practical method for the automatic construction of simple, all-quadrilateral patch layouts on manifold surfaces. The resulting layouts are coarse, surface-embedded cell complexes well adapted to the geometric structure, hence they are ideally suited as domains and base complexes for surface parameterization, spline fitting, or subdivision surfaces and can be used to generate quad meshes with a high-level patch structure that are advantageous in many application scenarios. Our approach is based on the careful construction of the layout graph's combinatorial dual. In contrast to the primal this dual perspective provides direct control over the globally interdependent structural constraints inherent to quad layouts. The dual layout is built from curvature-guided, crossing loops on the surface. A novel method to construct these efficiently in a geometry- and structure-aware manner constitutes the core of our approach. Title CrossShade: shading concept sketches using cross-section curves Abstract We facilitate the creation of 3D-looking shaded production drawings from concept sketches. The key to our approach is a class of commonly used construction curves known as The technical contribution of our work is twofold. First, we distill artistic guidelines for drawing cross-sections and insights from perception literature to introduce an explicit mathematical formulation of the relationships between cross-section curves and the geometry they aim to convey. We then use these relationships to develop an algorithm for estimating a normal field from cross-section curve networks and other curves present in concept sketches. We validate our formulation and algorithm through a user study and a ground truth normal comparison. As demonstrated by the examples throughout the paper, these contributions enable us to shade a wide range of concept sketches with a variety of rendering styles. Title Stitch meshes for modeling knitted clothing with yarn-level detail Abstract Recent yarn-based simulation techniques permit realistic and efficient dynamic simulation of knitted clothing, but producing the required yarn-level models remains a challenge. The lack of practical modeling techniques significantly limits the diversity and complexity of knitted garments that can be simulated. We propose a new modeling technique that builds yarn-level models of complex knitted garments for virtual characters. We start with a polygonal model that represents the large-scale surface of the knitted cloth. Using this mesh as an input, our interactive modeling tool produces a finer mesh representing the layout of stitches in the garment, which we call the Title A probabilistic model for component-based shape synthesis Abstract We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. 
Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis. Title How to walk your dog in the mountains with no magic leash Abstract We describe a O(log n)-approximation algorithm for computing the homotopic Frechet distance between two polygonal curves that lie on the boundary of a triangulated topological disk. Prior to this work, algorithms where known only for curves on the Euclidean plane with polygonal obstacles. A key technical ingredient in our analysis is a O(log n)-approximation algorithm for computing the minimum height of a homotopy between two curves. No algorithms were previously known for approximating this parameter. Surprisingly, it is not even known if computing either the homotopic Frechet distance, or the minimum height of a homotopy, is in NP. Title Approximating Tverberg points in linear time for any fixed dimension Abstract Let P be a d-dimensional n-point set. A Tverberg partition of P is a partition of P into r sets P1, ..., Pr such that the convex hulls ch(P1), ..., ch(Pr) have non-empty intersection. A point in the intersection of the convex hulls is called a Tverberg point of depth r for P. A classic result by Tverberg implies that there always exists a Tverberg partition of size n/(d+1), but it is not known how to find such a partition in polynomial time. Therefore, approximate solutions are of interest. We describe a deterministic algorithm that finds a Tverberg partition of size n/4(d+1) CCS Theory of computation Randomness, geometry and discrete structures Generating random combinatorial structures CCS Theory of computation Randomness, geometry and discrete structures Random walks and Markov chains CCS Theory of computation Randomness, geometry and discrete structures Expander graphs and randomness extractors CCS Theory of computation Randomness, geometry and discrete structures Error-correcting codes Title Proving a specific type of inequality theorems in ACL2: a bind-free experience report Abstract We describe how we guide ACL2 to follow a divide-andconquer strategy for proving inequalities of the type | Our approach involves (1) writing an ACL2 program to estimate the upper-bound of such polynomials and (2) using the bind-free mechanism to integrate the upper-bound estimation program to guide rewriting. We think it is interesting to showcase how we extract the relevant information from the hypothesis and how such information is used to influence rewriting. Techniques like ours can be useful to ACL2 users who want to better control rewriting when their problems share specific characteristics with our | Title Implementing the complex arcsine and arccosine functions using exception handling Abstract Title An example of error propagation reinterpreted as subtractive cancellation—revisited Abstract Title For unknown-but-bounded errors, interval estimates are often better than averaging Abstract Title When is double rounding innocuous? 
Abstract Title Determining accuracy bounds for simulation-based switching activity estimation Abstract Title Corrigenda: intersection algorithms for lines and circles Abstract Title How accurate should numerical routines be? Abstract Title Efficient evaluation of the area under the normal curve Abstract Title Hardware configuration selection through discretizing a continuous variable solution Abstract CCS Theory of computation Randomness, geometry and discrete structures Random projections and metric embeddings Title Optimizing content-preserving projections for wide-angle images Abstract Any projection of a 3D scene into a wide-angle image unavoidably results in distortion. Current projection methods either bend straight lines in the scene, or locally distort the shapes of scene objects. We present a method that minimizes this distortion by adapting the projection to content in the scene, such as salient scene regions and lines, in order to preserve their shape. Our optimization technique computes a spatially-varying projection that respects user-specified constraints while minimizing a set of energy terms that measure wide-angle image distortion. We demonstrate the effectiveness of our approach by showing results on a variety of wide-angle photographs, as well as comparisons to standard projections. Title Designing nonperspective projection through screen-space manipulation Abstract Title Robust moving least-squares fitting with sharp features Abstract Title Removing photography artifacts using gradient projection and flash-exposure sampling Abstract Title Applying scheduling and tuning to on-line parallel tomography Abstract Title Inference of a 3-D object from a random partial 2-D projection Abstract Title Stereoscopic projections and 3D scene reconstruction Abstract Title Skewed projections with an application to line stabbing in R3 Abstract Title Oriented projective geometry Abstract Title On detecting the orientation of polygons and polyhedra Abstract CCS Theory of computation Randomness, geometry and discrete structures Random network models CCS Theory of computation Randomness, geometry and discrete structures Random search heuristics CCS Theory of computation Theory and algorithms for application domains Machine learning theory CCS Theory of computation Theory and algorithms for application domains Algorithmic game theory and mechanism design CCS Theory of computation Theory and algorithms for application domains Database theory CCS Theory of computation Theory and algorithms for application domains Theory of randomized search heuristics CCS Theory of computation Semantics and reasoning Program constructs CCS Theory of computation Semantics and reasoning Program semantics CCS Theory of computation Semantics and reasoning Program reasoning CCS Mathematics of computing Discrete mathematics Combinatorics CCS Mathematics of computing Discrete mathematics Graph theory CCS Mathematics of computing Probability and statistics Probabilistic representations CCS Mathematics of computing Probability and statistics Probabilistic inference problems CCS Mathematics of computing Probability and statistics Probabilistic reasoning algorithms CCS Mathematics of computing Probability and statistics Probabilistic algorithms Title Tight bounds on information dissemination in sparse mobile networks Abstract Motivated by the growing interest in mobile systems, we study the dynamics of information dissemination between agents moving independently on a plane. 
Formally, we consider Title A generative framework for predictive modeling using variably aggregated, multi-source healthcare data Abstract Many measures of healthcare delivery or quality are not publicly available at the individual patient or hospital level largely due to privacy restrictions, legal issues or reporting norms. Instead, such measures are provided at a higher or more aggregated level, such as state-level, county-level summaries or averages over health zones (HRR Title Bayesian Kriging Analysis and Design for Stochastic Simulations Abstract Kriging is an increasingly popular metamodeling tool in simulation due to its flexibility in global fitting and prediction. In the fitting of this metamodel, the parameters are often estimated from the simulation data, which introduces parameter estimation uncertainties into the overall prediction error. Traditional plug-in estimators usually ignore these uncertainties, which can be substantial in stochastic simulations. This typically leads to an underestimation of the total variability and an overconfidence in the results. In this article, a Bayesian metamodeling approach for kriging prediction is proposed for stochastic simulations to more appropriately account for the parameter uncertainties. We derive the predictive distribution under certain assumptions and also provide a general Markov Chain Monte Carlo analysis approach to handle more general assumptions on the parameters and design. Numerical results indicate that the Bayesian approach has better coverage and better predictive variance than a previously proposed modified nugget effect kriging model, especially in cases where the stochastic variability is high. In addition, we further consider the important problem of planning the experimental design. We propose a two-stage design approach that systematically balances the allocation of computing resources to new design points and replication numbers in order to reduce the uncertainties and improve the accuracy of the predictions. Title Event processing under uncertainty Abstract Big data is recognized as one of the three technology trends at the leading edge a CEO cannot afford to overlook in 2012. Big data is characterized by volume, velocity, variety and veracity ("data in doubt"). As big data applications, many of the emerging event processing applications must process events that arrive from sources such as sensors and social media, which have inherent uncertainties associated with them. Consider, for example, the possibility of incomplete data streams and streams including inaccurate data. In this tutorial we classify the different types of uncertainty found in event processing applications and discuss the implications on event representation and reasoning. An area of research in which uncertainty has been studied is Artificial Intelligence. We discuss, therefore, the main Artificial Intelligence-based event processing systems that support probabilistic reasoning. The presented approaches are illustrated using an example concerning crime detection. Title A basic model for proactive event-driven computing Abstract During the movie "Source Code" there is a shift in the plot; from (initially) reacting to a train explosion that already occurred and trying to eliminate further explosions, to (later) changing the reality to avoid the original train explosion. 
Whereas changing the history after events have happened is still within the science fiction domain, changing the reality to avoid events that have not happened yet is, in many cases, feasible, and may yield significant benefits. We use the term proactive behavior to designate the change of what will be reality in the future. In particular, we focus on proactive event-driven computing: the use of event-driven systems to predict future events and react to them before they occur. In this paper we start our investigation of this large area by constructing a model and end-to-end implementation of a restricted subset of basic proactive applications that try to eliminate a single forecasted event, selecting among a finite and relatively small set of feasible actions, known at design time, based on quantified cost functions over time. After laying out the model, we describe the extensions required of the conceptual architecture of event processing to support such applications: supporting proactive agents as part of the model, supporting the derivation of forecasted events, and supporting various aspects of uncertainty; next, we show a decision algorithm that selects among the alternatives. We demonstrate the approach by implementing an example of a basic proactive application in the area of condition-based maintenance, and showing experimental results. Title Integrating particle swarm optimization with reinforcement learning in noisy problems Abstract Noisy optimization problems arise very often in real-life applications. A common practice for tackling problems characterized by uncertainties is to re-evaluate the objective function at every point of interest for a fixed number of replications. The obtained objective values are then averaged and their mean is taken as the approximation of the actual objective value. However, this approach can prove inefficient, allocating replications to unpromising candidate solutions. We propose a hybrid approach that integrates the established Particle Swarm Optimization algorithm with the Reinforcement Learning approach to efficiently tackle noisy problems by intelligently allocating the available computational budget. Two variants of the proposed approach, based on different selection schemes, are assessed and compared against the typical alternative of equal sampling. The results are reported and analyzed, offering significant evidence regarding the potential of the proposed approach. Title Hybrid metaheuristic particle filters for stochastic volatility estimation Abstract In this paper we propose hybrid metaheuristic particle filters for the dual estimation of state and parameters in a stochastic volatility estimation problem. We use evolutionary strategies and real-coded genetic algorithms as the metaheuristics. The hybrid metaheuristic particle filters provide accurate results while using a smaller number of particles for this high-dimensional estimation problem. We compare the performance of our hybrid algorithms with a sequential importance resampling particle filter (SIR) and the parameter learning algorithm (PLA). Our hybrid particle filters outperform both these algorithms for this particular dual estimation problem.
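The sequential importance resampling (SIR) particle filter used as a baseline in the stochastic volatility entry above can be sketched as a bootstrap filter on the standard log-volatility model. The parameter values, the synthetic data, and the particle count below are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of a bootstrap (SIR) particle filter for the standard
# stochastic volatility model  x_t = mu + phi*(x_{t-1} - mu) + sigma*eta_t,
# y_t ~ N(0, exp(x_t)).  All constants here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
mu, phi, sigma = -1.0, 0.95, 0.2          # assumed "true" SV parameters
T, N = 200, 1000                          # time steps, number of particles

# Simulate synthetic log-volatility states and observations.
x = np.empty(T)
x[0] = mu + sigma / np.sqrt(1 - phi**2) * rng.standard_normal()
for t in range(1, T):
    x[t] = mu + phi * (x[t - 1] - mu) + sigma * rng.standard_normal()
y = np.exp(x / 2) * rng.standard_normal(T)

# Bootstrap filter: propagate, weight by the observation density, resample.
particles = mu + sigma / np.sqrt(1 - phi**2) * rng.standard_normal(N)
est = np.empty(T)
for t in range(T):
    if t > 0:
        particles = mu + phi * (particles - mu) + sigma * rng.standard_normal(N)
    # Log of the Gaussian observation density with variance exp(x_t), up to a constant.
    logw = -0.5 * (particles + y[t] ** 2 * np.exp(-particles))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.dot(w, particles)          # filtered mean of the log-volatility
    idx = rng.choice(N, size=N, p=w)       # multinomial resampling
    particles = particles[idx]

print("RMSE of filtered log-volatility:", np.sqrt(np.mean((est - x) ** 2)))
```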
Title Beyond random walk and metropolis-hastings samplers: why you should not backtrack for unbiased graph sampling Abstract Graph sampling via crawling has been actively considered as a generic and important tool for collecting uniform node samples so as to consistently estimate and uncover various characteristics of complex networks. The so-called simple random walk with re-weighting (SRW-rw) and Metropolis-Hastings (MH) algorithm have been popular in the literature for such unbiased graph sampling. However, an unavoidable downside of their core random walks -- slow diffusion over the space, can cause poor estimation accuracy. In this paper, we propose non-backtracking random walk with re-weighting (NBRW-rw) and MH algorithm with delayed acceptance (MHDA) which are theoretically guaranteed to achieve, at almost no additional cost, not only unbiased graph sampling but also higher efficiency (smaller asymptotic variance of the resulting unbiased estimators) than the SRW-rw and the MH algorithm, respectively. In particular, a remarkable feature of the MHDA is its applicability for any non-uniform node sampling like the MH algorithm, but ensuring better sampling efficiency than the MH algorithm. We also provide simulation results to confirm our theoretical findings. Title Uniform approximation of the distribution for the number of retransmissions of bounded documents Abstract Retransmission-based failure recovery represents a primary approach in existing communication networks, on all protocol layers, that guarantees data delivery in the presence of channel failures. Contrary to the traditional belief that the number of retransmissions is geometrically distributed, a new phenomenon was discovered recently, which shows that retransmissions can cause long (-tailed) delays and instabilities even if all traffic and network characteristics are light-tailed, e.g., exponential or Gaussian. Since the preceding finding holds under the assumption that data sizes have infinite support, in this paper we investigate the practically important case of bounded data units 0≤ L Title Don't let the negatives bring you down: sampling from streams of signed updates Abstract Random sampling has been proven time and time again to be a powerful tool for working with large data. Queries over the full dataset are replaced by approximate queries over the smaller (and hence easier to store and manipulate) sample. The sample constitutes a flexible summary that supports a wide class of queries. But in many applications, datasets are modified with time, and it is desirable to update samples without requiring access to the full underlying datasets. In this paper, we introduce and analyze novel techniques for sampling over dynamic data, modeled as a stream of modifications to weights associated with each key. While sampling schemes designed for stream applications can often readily accommodate positive updates to the dataset, much less is known for the case of negative updates, where weights are reduced or items deleted altogether. We primarily consider the turnstile model of streams, and extend classic schemes to incorporate negative updates. Perhaps surprisingly, the modifications to handle negative updates turn out to be natural and seamless extensions of the well-known positive update-only algorithms. We show that they produce unbiased estimators, and we relate their performance to the behavior of corresponding algorithms on insert-only streams with different parameters. 
A careful analysis is necessitated, in order to account for the fact that sampling choices for one key now depend on the choices made for other keys. In practice, our solutions turn out to be efficient and accurate. Compared to recent algorithms for L CCS Mathematics of computing Probability and statistics Statistical paradigms CCS Mathematics of computing Probability and statistics Stochastic processes CCS Mathematics of computing Probability and statistics Nonparametric statistics Title Practical collapsed variational bayes inference for hierarchical dirichlet process Abstract We propose a novel collapsed variational Bayes (CVB) inference for the hierarchical Dirichlet process (HDP). While the existing CVB inference for the HDP variant of latent Dirichlet allocation (LDA) is more complicated and harder to implement than that for LDA, the proposed algorithm is simple to implement, does not require variance counts to be maintained, does not need to set hyper-parameters, and has good predictive performance. Title Event processing grand challenges Abstract This tutorial discusses grand challenges for the event processing community. Title Speculation on the generality of the backward stepwise view of PCA Abstract A novel backwards viewpoint of Principal Component Analysis is proposed. In a wide variety of cases, that fall into the area of Object Oriented Data Analysis, this viewpoint is seen to provide much more natural and accessable analogs of PCA than the standard forward viewpoint. Examples considered here include principal curves, landmark based shape analysis, medial shape representation and trees as data. Title Nonparametric estimation of the precision-recall curve Abstract The Precision-Recall (PR) curve is a widely used visual tool to evaluate the performance of scoring functions in regards to their capacities to discriminate between two populations. The purpose of this paper is to examine both theoretical and practical issues related to the statistical estimation of PR curves based on classification data. Consistency and asymptotic normality of the empirical counterpart of the PR curve in sup norm are rigorously established. Eventually, the issue of building confidence bands in the PR space is considered and a specific resampling procedure based on a smoothed and truncated version of the empirical distribution of the data is promoted. Arguments of theoretical and computational nature are presented to explain why such a bootstrap is preferable to a "naive" bootstrap in this setup. Title Large-scale collaborative prediction using a nonparametric random effects model Abstract A nonparametric model is introduced that allows multiple related regression tasks to take inputs from a common data space. Traditional transfer learning models can be inappropriate if the dependence among the outputs cannot be fully resolved by known input-specific and task-specific predictors. The proposed model treats such output responses as conditionally independent, given known predictors and appropriate unobserved Title Nonparametric factor analysis with beta process priors Abstract We propose a nonparametric extension to the factor analysis problem using a beta process prior. This Title A framework for estimating complex probability density structures in data streams Abstract Probability density function estimation is a fundamental component in several stream mining tasks such as outlier detection and classification. 
The nonparametric adaptive kernel density estimate (AKDE) provides a robust and asymptotically consistent estimate for an arbitrary distribution. However, its extensive computational requirements make it difficult to apply this technique to the stream environment. This paper tackles the issue of developing efficient and asymptotically consistent AKDE over data streams while heeding the stringent constraints imposed by the stream environment. We propose the concept of local regions to effectively synopsize local density features, design a suite of algorithms to maintain the AKDE under a time-based sliding window, and analyze the estimates' asymptotic consistency and computational costs. In addition, extensive experiments were conducted with real-world and synthetic data sets to demonstrate the effectiveness and efficiency of our approach. Title Knowledge discovery of semantic relationships between words using nonparametric bayesian graph model Abstract We developed a model based on nonparametric Bayesian modeling for automatic discovery of semantic relationships between words taken from a corpus. It is aimed at discovering semantic knowledge about words in particular domains, which has become increasingly important with the growing use of text mining, information retrieval, and speech recognition. The subject-predicate structure is taken as a syntactic structure with the noun as the subject and the verb as the predicate. This structure is regarded as a graph structure. The generation of this graph can be modeled using the hierarchical Dirichlet process and the Pitman-Yor process. The probabilistic generative model we developed for this graph structure consists of subject-predicate structures extracted from a corpus. Evaluation of this model by measuring the performance of graph clustering based on WordNet similarities demonstrated that it outperforms other baseline models. Title Gaussian process product models for nonparametric nonstationarity Abstract Stationarity is often an unrealistic prior assumption for Gaussian process regression. One solution is to predefine an explicit nonstationary covariance function, but such covariance functions can be difficult to specify and require detailed prior knowledge of the nonstationarity. We propose the Gaussian process product model (GPPM) which models data as the pointwise product of two latent Gaussian processes to nonparametrically infer nonstationary variations of amplitude. This approach differs from other nonparametric approaches to covariance function inference in that it operates on the outputs rather than the inputs, resulting in a significant reduction in computational cost and required data for inference. We present an approximate inference scheme using Expectation Propagation. This variational approximation yields convenient GP hyperparameter selection and compact approximate predictive distributions. Title Tailoring density estimation via reproducing kernel moment matching Abstract Moment matching is a popular means of parametric density estimation. We extend this technique to nonparametric estimation of mixture models. Our approach works by embedding distributions into a reproducing kernel Hilbert space, and performing moment matching in that space. This allows us to tailor density estimators to a function class of interest (i.e., for which we would like to compute expectations). We show our density estimation approach is useful in applications such as message compression in graphical models, and image classification and retrieval. 
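The adaptive kernel density estimate (AKDE) that the streaming framework above builds on can be sketched offline with per-sample bandwidths derived from a pilot estimate (Abramson-style adaptivity). The data, the bandwidth rule, and the constants below are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of an adaptive kernel density estimate: a fixed-bandwidth
# pilot KDE sets per-sample bandwidths, so kernels narrow in dense regions and
# widen in sparse ones.  The data and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 0.3, 300), rng.normal(1.5, 1.0, 700)])

def gaussian_kde(x, samples, bandwidths):
    # Average of Gaussian kernels centred at the samples, one bandwidth per sample.
    z = (x[:, None] - samples[None, :]) / bandwidths[None, :]
    k = np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * bandwidths[None, :])
    return k.mean(axis=1)

# Pilot estimate with a global rule-of-thumb bandwidth.
h0 = 1.06 * data.std() * len(data) ** (-1 / 5)
pilot = gaussian_kde(data, data, np.full(len(data), h0))

# Adaptive bandwidths: scale h0 by the inverse square root of the pilot density,
# normalised by its geometric mean.
local = h0 * np.sqrt(np.exp(np.mean(np.log(pilot))) / pilot)

grid = np.linspace(-4, 6, 5)
print(np.round(gaussian_kde(grid, data, local), 4))
```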
CCS Mathematics of computing Probability and statistics Distribution functions Title Learning poisson binomial distributions Abstract We consider a basic problem in unsupervised learning: learning an unknown We essentially settle the complexity of the learning problem for this basic class of distributions. As our main result we give a highly efficient algorithm which learns to ε-accuracy using O(1/ε Title Quantum rejection sampling Abstract Rejection sampling is a well-known method to sample from a target distribution, given the ability to sample from a given distribution. The method has been first formalized by von Neumann (1951) and has many applications in classical computing. We define a quantum analogue of rejection sampling: given a black box producing a coherent superposition of (possibly unknown) quantum states with some amplitudes, the problem is to prepare a coherent superposition of the same states, albeit with different target amplitudes. The main result of this paper is a tight characterization of the query complexity of this quantum state generation problem. We exhibit an algorithm, which we call quantum rejection sampling, and analyze its cost using semidefinite programming. Our proof of a matching lower bound is based on the automorphism principle which allows to symmetrize any algorithm over the automorphism group of the problem. Our main technical innovation is an extension of the automorphism principle to continuous groups that arise for quantum state generation problems where the oracle encodes unknown quantum states, instead of just classical data. Furthermore, we illustrate how quantum rejection sampling may be used as a primitive in designing quantum algorithms, by providing three different applications. We first show that it was implicitly used in the quantum algorithm for linear systems of equations by Harrow, Hassidim and Lloyd. Secondly, we show that it can be used to speed up the main step in the quantum Metropolis sampling algorithm by Temme Title Evaluation of the mean cycle time in stochastic discrete event dynamic systems Abstract We consider stochastic discrete event dynamic systems that have time evolution represented with two-dimensional state vectors through a vector equation that is linear in terms of an idempotent semiring. The state transitions are governed by second-order random matrices that are assumed to be independent and identically distributed. The problem of interest is to evaluate the mean growth rate of state vector, which is also referred to as the mean cycle time of the system, under various assumptions on the matrix entries. We give an overview of early results including a solution for systems determined by matrices with independent entries having a common exponential distribution. It is shown how to extend the result to the cases when the entries have different exponential distributions and when some of the entries are replaced by zero. Finally, the mean cycle time is calculated for systems with matrices that have one random entry, whereas the other entries in the matrices can be arbitrary nonnegative and zero constants. The random entry is always assumed to have exponential distribution except for one case of a matrix with zero row when the particular form of the matrix makes it possible to obtain a solution that does not rely on exponential distribution assumptions. 
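The mean cycle time studied in the last entry above can also be estimated by straightforward Monte Carlo simulation. The sketch below assumes the max-plus semiring as the idempotent semiring, a 2x2 system with i.i.d. exponential entries, and an arbitrary rate parameter; all of these are illustrative assumptions rather than settings from the paper.

```python
# A minimal sketch estimating the mean cycle time of a max-plus linear system
# x_k = A_k (x) x_{k-1} with i.i.d. exponential matrix entries, i.e. the growth
# rate lim_k max_i x_k(i) / k.  All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def maxplus_matvec(A, x):
    # (A (x) x)_i = max_j (A_ij + x_j): max-plus matrix-vector product.
    return np.max(A + x[None, :], axis=1)

def mean_cycle_time(rate=1.0, steps=100_000, dim=2):
    x = np.zeros(dim)
    for _ in range(steps):
        A = rng.exponential(1.0 / rate, size=(dim, dim))  # random second-order matrix
        x = maxplus_matvec(A, x)
    return x.max() / steps  # empirical growth rate of the state vector

print("estimated mean cycle time:", round(mean_cycle_time(), 3))
```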
Title Spatial probabilistic modeling of calls to businesses Abstract Local search engines allow users to search for entities such as businesses in a particular geographic location. To improve the geographic relevance of search, user feedback data such as logged click locations are traditionally used. In this paper, we use anonymized mobile call log data as an alternate source of data and investigate its relevance to local search. Such data consists of records of anonymized mobile calls made to local businesses along with the locations of celltowers that handled the calls. We model the probability of calls made to particular categories of businesses as a function of distance, using a generalized linear model framework. We provide a detailed comparison between a click log and a mobile call log, showing its relevance to local search. We describe our probabilistic models and apply them to anonymized mobile call logs for New York City and Los Angeles restaurants. Title An analysis of user behavior in online video streaming Abstract Understanding user behavior in online video streaming is essential to designing streaming systems which provide user-oriented service. However, it is challenging to gain insightful knowledge of the characteristics of user behavior due to its high volatility. To this end, the paper provides an extensive analysis of user behavior in online video streaming, based on a large scale trace database of online streaming video access sessions. We categorize user behaviors into multiple patterns and probe the relationship between them. Our work puts emphasis on the statistical characteristics of user behavior patterns. Particularly, this study uncovers that the behavior of one individual user in a video streaming session is not only related to the popularity level of the video, but also has strong correlation with the user's behaviors in previous streaming sessions. Title Generalized analysis of a distributed energy efficient algorithm for change detection Abstract An energy efficient distributed Change Detection scheme based on Page's CUSUM algorithm was presented in [2]. In this paper we consider a nonparametric version of this algorithm. In the algorithm in [2], each sensor runs CUSUM and transmits only when the CUSUM is above some threshold. The transmissions from the sensors are fused at the physical layer. The channel is modeled as a Multiple Access Channel (MAC) corrupted with noise. The fusion center performs another CUSUM to detect the change. In this paper, we generalize the algorithm to also include nonparametric CUSUM and provide a unified analysis. Title Piecewise-stationary bandit problems with side observations Abstract We consider a sequential decision problem where the rewards are generated by a piecewise-stationary distribution. However, the different reward distributions are unknown and may change at unknown instants. Our approach uses a limited number of side observations on past rewards, but does not require prior knowledge of the frequency of changes. In spite of the adversarial nature of the reward process, we provide an algorithm whose regret, with respect to the baseline with perfect knowledge of the distributions and the changes, is Title A method for clustering transient data streams Abstract This paper describes a novel method for clustering single and multi-dimensional data streams. With incremental computation of the incoming data, our method determines if the cluster formation should change from an initial cluster formation. 
Four main types of cluster evolutions are studied: cluster appearance, cluster disappearance, cluster splitting, and cluster merging. We present experimental results of our algorithms both in terms of scalability and cluster quality, compared with recent work in this area. Title Eliciting properties of probability distributions: the highlights Abstract We investigate the problem of incentivizing an expert to truthfully reveal probabilistic information about a random event. Probabilistic information consists of one or more properties, which are any real-valued functions of the distribution, such as the mean and variance. Not all properties can be elicited truthfully. We provide a simple characterization of elicitable properties, and describe the general form of the associated payment functions that induce truthful revelation. We then consider sets of properties, and observe that all properties can be inferred from sets of elicitable properties. This suggests the concept of elicitation complexity for a property, the size of the smallest set implying the property. Title Bayesian estimation of rule accuracy in UCS Abstract Learning Classifier Systems differ from many other classification techniques, in that new rules are constantly discovered and evaluated. This feature of LCS gives rise to an important problem, how to deal with estimates of rule accuracy that are unreliable due to the small number of performance samples available. In this paper we highlight the importance of this problem for LCS, summarise previous heuristic approaches to the problem, and propose instead the use of principles from Bayesian estimation. In particular we argue that discounting estimates of accuracy based on inexperience must be recognised as a crucially important part of the specification of LCS, and must be well motivated. We present experimental results on using the Bayesian approach to discounting, consider how to estimate the parameters for it, and identify benefits of its use for other areas of LCS. CCS Mathematics of computing Probability and statistics Multivariate statistics Title The cocktail party robot: sound source separation and localisation with an active binaural head Abstract Human-robot communication is often faced with the difficult problem of interpreting ambiguous auditory data. For example, the acoustic signals perceived by a humanoid with its on-board microphones contain a mix of sounds such as speech, music, electronic devices, all in the presence of attenuation and reverberations. In this paper we propose a novel method, based on a generative probabilistic model and on active binaural hearing, allowing a robot to robustly perform sound-source separation and localization. We show how interaural spectral cues can be used within a constrained mixture model specifically designed to capture the richness of the data gathered with two microphones mounted onto a human-like artificial head. We describe in detail a novel EM algorithm, we analyse its initialization, speed of convergence and complexity, and we assess its performance with both simulated and real data. Title A multivariate probabilistic method for comparing two clinical datasets Abstract We present a novel method for obtaining a concise and mathematically grounded description of multivariate differences between a pair of clinical datasets. Often data collected under similar circumstances reflect fundamentally different patterns. 
For example, information about patients undergoing similar treatments in different intensive care units (ICUs), or within the same ICU during different periods, may show systematically different outcomes. In such circumstances, the multivariate probability distributions induced by the datasets would differ in selected ways. To capture the probabilistic relationships, we learn a Bayesian network (BN) from the union of the two datasets. We include an indicator variable that represents the dataset from which a given patient record is obtained. We then extract the relevant conditional distributions from the network by finding the conditional probabilities that differ most when conditioning on the indicator variable. Our work is a form of explanation that bears some similarity to previous work on BN explanation; however, while previous work has mostly focused on justifying inference, our work is aimed at explaining multivariate differences between distributions. Title Decision forest for multivariate time series analysis Abstract Nowadays, with time series accounting for an increasingly large fraction of the world's supply of data, there has been an explosion of interest in mining time series data. This paper proposes a multivariate time series classification model which is effective in terms of both accuracy and comprehensibility. It is composed of two stages: supervised clustering for pattern extraction and a soft-discretization decision forest. In supervised clustering, some real time series instances from the training dataset are selected as class-dedicated patterns. In the decision forest, rule induction helps to improve the knowledge acquisition of the classifier. In addition, soft discretization further improves the accuracy and comprehensibility of the classifier. Title Fast PCA for processing calcium-imaging data from the brain of drosophila melanogaster Abstract The calcium-imaging technique allows us to record movies of brain activity in the antennal lobe of the fruitfly Drosophila melanogaster. Title Analyzing tropical cyclone tracks of North Indian Ocean Abstract Cyclones are regarded as one of the most dangerous meteorological phenomena of the tropical region. The probability of landfall of a tropical cyclone depends on its movement (trajectory). Analysis of trajectories of tropical cyclones could be useful for identifying potentially predictable characteristics. In this study, tropical cyclone tracks over the North Indian Ocean basin have been analyzed and grouped into clusters based on their spatial characteristics. For the identified clusters we have also examined characteristics such as life span, maximum sustained wind speed, landfall, and seasonality. The resultant clusters form clear groupings with respect to some of these characteristics. The cyclones with the highest maximum wind speeds and longest life spans are grouped into one cluster. Another cluster includes short-duration cyclonic events that are mostly deep depressions and significant for rainfall over Eastern and Central India. The clustering approach is likely to prove useful for analysis of events of significance with regard to impacts. Title A theoretical framework for interaction measure and sensitivity analysis in cross-layer design Abstract Cross-layer design has become one of the most effective and efficient methods to provide Quality of Service (QoS) over various communication networks, especially over wireless multimedia networks.
However, current research on cross-layer design has been carried out in various piecemeal approaches, and lacks a methodological foundation to gain in-depth understanding of complex cross-layer behaviors such as multiscale temporal-spatial behavior, leading to a design paradox and/or unmanageable design problems. In this article, we propose a theoretical framework for quantitative interaction measures, which is further extended to sensitivity analysis by quantifying the contribution made by each design variable and by the interactions among them on the design objective. Thus, the proposed framework can significantly enhance our capability for cross-layer behavior characterization and provide design insights for future designs. Furthermore, a case study on cross-layer optimized wireless multimedia communications has been adopted to illustrate major cross-layer design trade-offs and validate the proposed framework. Both analytical and experimental results show the correctness and effectiveness of the proposed framework. Title A combined PCA-MLP model for predicting stock index Abstract Predicting stock prices is a challenging and daunting task due to the complexity of the stock market. In this study, a combined model is proposed to explore market tendency. The daily closing price is predicted using the daily opening price, high, low, and volume of transactions as predictor variables. In this approach, the predictor variables are multicollinear in nature, which is overcome by using Principal Component Analysis (PCA); this results in a new set of independent variables that are then used to predict the stock prices with a Multilayer Perceptron (MLP) model. To evaluate the prediction ability of the model, we compare the performance of models using a common error measure. The empirical results reveal that the proposed approach is a promising alternative for stock market prediction. Title Efficiently learning mixtures of two Gaussians Abstract Given data drawn from a mixture of multivariate Gaussians, a basic problem is to accurately estimate the mixture parameters. We provide a polynomial-time algorithm for this problem for the case of two Gaussians in $n$ dimensions (even if they overlap), with provably minimal assumptions on the Gaussians, and polynomial data requirements. In statistical terms, our estimator converges at an inverse polynomial rate, and no such estimator (even exponential time) was known for this problem (even in one dimension). Our algorithm reduces the n-dimensional problem to the one-dimensional problem, where the As a corollary, we can efficiently perform near-optimal clustering: in the case where the overlap between the Gaussians is small, one can accurately cluster the data, and when the Gaussians have partial overlap, one can still accurately cluster those data points which are not in the overlap region. A second consequence is a polynomial-time density estimation algorithm for arbitrary mixtures of two Gaussians, generalizing previous work on axis-aligned Gaussians (Feldman et al., 2006). Title Joint attention, joint probability Abstract This paper presents a novel probabilistic approach to joint attention. Joint attention is a communicative activity that allows communicators to share perceptual experience by attending to the same visual object. This communicative activity is conceptualized as a conditional probability over jointly given test and cue stimuli.
To formalize joint attention in mathematical terms, our approach starts from a simple decision task in which the response of subjects is determined by a test stimulus only. Our approach then extends to an attentional cueing task in which subjects make a decision on a test stimulus that is jointly given with a cue stimulus. The joint relationship between test and cue stimuli yields attentional cueing effects -- faster and more accurate responses if the two stimuli are consistent and slower and less accurate responses if not. With our model, a series of simulations were carried out to show interesting properties of the model that cannot be captured using a test stimulus alone. The model successfully locates a visual object guided by a cue stimulus such as color or a pointing gesture. These results indicate that joint attention can be considered as a cooperative decision process on a visual object among many objects with a referential cue derived from a communicator. Title Persistent cohomology and circular coordinates Abstract Nonlinear dimensionality reduction (NLDR) algorithms such as Isomap, LLE and Laplacian Eigenmaps address the problem of representing high-dimensional nonlinear data in terms of low-dimensional coordinates which represent the intrinsic structure of the data. This paradigm incorporates the assumption that real-valued coordinates provide a rich enough class of functions to represent the data faithfully and efficiently. On the other hand, there are simple structures which challenge this assumption: the circle, for example, is one-dimensional but its faithful representation requires two real coordinates. In this work, we present a strategy for constructing circle-valued functions on a statistical data set. We develop the machinery of persistent cohomology to identify candidates for significant circle-structures in the data, and we use harmonic smoothing and integration to obtain the circle-valued coordinate functions themselves. We suggest that this enriched class of coordinate functions permits a precise NLDR analysis of a broader range of realistic data sets. CCS Mathematics of computing Mathematical software Solvers CCS Mathematics of computing Mathematical software Statistical software Title Probabilistic models for concurrent chatting activity recognition Abstract Recognition of chatting activities in social interactions is useful for constructing human social networks. However, the existence of multiple people involved in multiple dialogues presents special challenges. To model the conversational dynamics of concurrent chatting behaviors, this article advocates Factorial Conditional Random Fields (FCRFs) as a model to accommodate co-temporal relationships among multiple activity states. In addition, to avoid the use of the inefficient Loopy Belief Propagation (LBP) algorithm, we propose using the Iterative Classification Algorithm (ICA) as the inference method for FCRFs. We designed experiments to compare our FCRFs model with two dynamic probabilistic models, Parallel Conditional Random Fields (PCRFs) and Hidden Markov Models (HMMs), in learning and decoding based on auditory data. The experimental results show that FCRFs outperform PCRFs and HMM-like models. We also discover that FCRFs using the ICA inference approach not only improve the recognition accuracy but also take significantly less time than the LBP inference method.
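The FCRF abstract above relies on the Iterative Classification Algorithm (ICA) for inference. As a point of reference, here is a minimal, self-contained sketch of generic iterative classification on a graph; the ring graph, toy features, and hand-written classification rule are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of the Iterative Classification Algorithm (ICA): bootstrap each
# node's label from its local features alone, then repeatedly re-label each node
# using the histogram of its neighbours' current labels as extra relational
# evidence, until the labelling stops changing.
import numpy as np

def ica(nodes, neighbours, classify, n_classes, max_iter=20):
    """nodes: (n, d) local feature matrix; neighbours: list of index lists;
    classify(local_feats, label_hist) -> predicted class for one node."""
    zero_hist = np.zeros(n_classes)
    labels = np.array([classify(x, zero_hist) for x in nodes])   # bootstrap step
    for _ in range(max_iter):
        prev = labels.copy()
        for i in np.random.permutation(len(nodes)):              # re-label in random order
            hist = np.bincount(labels[neighbours[i]], minlength=n_classes)
            labels[i] = classify(nodes[i], hist)
        if np.array_equal(labels, prev):                         # converged
            break
    return labels

# Toy usage with a hand-written rule: label 1 if the local score plus a small
# neighbour-majority bonus is positive, otherwise label 0.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10, 1))
    nbrs = [[(i - 1) % 10, (i + 1) % 10] for i in range(10)]     # ring graph
    rule = lambda x, h: int(x[0] + 0.5 * (h[1] - h[0]) > 0)
    print(ica(X, nbrs, rule, n_classes=2))
```

In the factorial-CRF setting of the paper, the neighbourhood structure would come from the within-chain and co-temporal factors, and the per-node classifier would be learned from data rather than hand-written.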
Title An integrated framework for enabling effective data collection and statistical analysis with ns-2 Abstract Title eyePatterns: software for identifying patterns and similarities across fixation sequences Abstract Title Graphical interactive simulation input modeling with bivariate Bézier distributions Abstract Title Algorithm 727: Quantile estimation using overlapping batch statistics Abstract Title Corrigendum: Algorithm 725: Computation of the multivariate normal integral Abstract Title Computation of the multivariate normal integral Abstract Title Looking for Mr. X bar: supporting statistical computing in the personal computer era Abstract Title Algorithm 615: the best subset of parameters in least absolute value regression Abstract Title Remark on algorithm 590: Fortran subroutines for computing deflating subspaces with specified spectrum Abstract CCS Mathematics of computing Mathematical software Mathematical software performance CCS Mathematics of computing Information theory Coding theory Title Querying RDF dictionaries in compressed space Abstract The use of dictionaries is a common practice among applications operating on huge RDF datasets. It allows long terms occurring in the RDF triples to be replaced by short IDs which reference them. This decision greatly compacts the dataset and mitigates the scalability issues underlying its management. However, the dictionary size is not negligible and the techniques used for its representation also suffer from scalability limitations. This paper focuses on this scenario by adapting compression techniques for string dictionaries to the case of RDF. We propose a novel technique: Title Interactive information complexity Abstract The primary goal of this paper is to define and study the interactive information complexity of functions. Let f(x,y) be a function, and suppose Alice is given x and Bob is given y. Informally, the interactive information complexity IC(f) of f is the least amount of information Alice and Bob need to reveal to each other to compute f. Previously, information complexity has been defined with respect to a prior distribution on the input pairs (x,y). Our first goal is to give a definition that is independent of the prior distribution. We show that several possible definitions are essentially equivalent. We establish some basic properties of the interactive information complexity IC(f). In particular, we show that IC(f) is equal to the amortized (randomized) communication complexity of f. We also show a direct sum theorem for IC(f) and give the first general connection between information complexity and (non-amortized) communication complexity. This connection implies that a non-trivial exchange of information is required when solving problems that have non-trivial communication complexity. We explore the information complexity of two specific problems - Equality and Disjointness. We show that only a constant amount of information needs to be exchanged when solving Equality with no errors, while solving Disjointness with a constant error probability requires the parties to reveal a linear amount of information to each other. Title Folded codes from function field towers and improved optimal rate list decoding Abstract We give a new construction of algebraic codes which are efficiently list decodable from a fraction 1-R-ε of adversarial errors where R is the rate of the code, for any desired positive constant ε.
The worst-case list size output by the algorithm is O(1/ε), matching the existential bound for random codes up to constant factors. Further, the alphabet size of the codes is a constant depending only on ε --- it can be made exp(~O(1/ε In comparison, algebraic codes achieving the optimal trade-off between list decodability and rate based on folded Reed-Solomon codes have a decoding complexity of N Title Word-based self-indexes for natural language text Abstract The inverted index supports efficient full-text searches on natural language text collections. It requires some extra space over the compressed text that can be traded for search speed. It is usually fast for single-word searches, yet phrase searches require more expensive intersections. In this article we introduce a different kind of index. It replaces the text using essentially the same space required by the compressed text alone (compression ratio around 35%). Within this space it supports not only decompression of arbitrary passages, but efficient word and phrase searches. Searches are orders of magnitude faster than those over inverted indexes when looking for phrases, and still faster on single-word searches when little space is available. Our new indexes are particularly fast at We adapt Title Bounds on locally testable codes with unique tests Abstract The The computational complexity notion of a PCP is closely related to the combinatorial notion of a In light of the strong connection between PCPs and LTCs, one may conjecture the existence of LTCs with properties similar to the ones required by the UGC. In this work we show limitations on such LTCs: We consider 2-query LTCs with codeword testers that only make unique tests. Roughly speaking, we show that any such LTC with relative distance close to 1, almost-perfect completeness and low-soundness, is of constant size. While our result does not imply anything about the correctness of the UGC, it does show some limitations of unique tests, compared, for example, to projection tests. Title On iterative compressed sensing reconstruction of sparse non-negative vectors Abstract We consider the iterative reconstruction of the Compressed Sensing (CS) problem over reals. The iterative reconstruction allows interpretation as a channel-coding problem, and it guarantees perfect reconstruction for properly chosen measurement matrices and sufficiently sparse error vectors. In this paper, we give a summary on reconstruction algorithms for compressed sensing and examine how the iterative reconstruction performs on quasi-cyclic low-density parity check (QC-LDPC) measurement matrices. Title High-rate codes with sublinear-time decoding Abstract Locally decodable codes are error-correcting codes that admit efficient decoding algorithms; any bit of the original message can be recovered by looking at only a small number of locations of a corrupted codeword. The tradeoff between the rate of a code and the locality/efficiency of its decoding algorithms has been well studied, and it has widely been suspected that nontrivial locality must come at the price of low rate. A particular setting of potential interest in practice is codes of constant rate. For such codes, decoding algorithms with locality O(k In this paper we construct a new family of locally decodable codes that have very efficient local decoding algorithms, and at the same time have rate approaching 1. 
We show that for every ε > 0 and α > 0, for infinitely many k, there exists a code C which encodes messages of length k with rate 1 - α, and is locally decodable from a constant fraction of errors using O(k These codes, which we call multiplicity codes, are based on evaluating high degree multivariate polynomials and their derivatives. Multiplicity codes extend traditional multivariate polynomial based codes; they inherit the local-decodability of these codes, and at the same time achieve better tradeoffs and flexibility in their rate and distance. Title From low-distortion norm embeddings to explicit uncertainty relations and efficient information locking Abstract Quantum uncertainty relations are at the heart of many quantum cryptographic protocols performing classically impossible tasks. One operational manifestation of these uncertainty relations is a purely quantum effect referred to as information locking. A locking scheme can be viewed as a cryptographic protocol in which a uniformly random n-bit message is encoded in a quantum system using a classical key of size much smaller than n. Without the key, no measurement of this quantum state can extract more than a negligible amount of information about the message (the message is "locked"). Furthermore, knowing the key, it is possible to recover (or "unlock") the message. In this paper, we make the following contributions by exploiting a connection between uncertainty relations and low-distortion embeddings of L2 into L1. * We introduce the notion of metric uncertainty relations and connect it to low-distortion embeddings of L2 into L1. A metric uncertainty relation also implies an entropic uncertainty relation. * We prove that random bases satisfy uncertainty relations with a stronger definition and better parameters than previously known. Our proof is also considerably simpler than earlier proofs. We apply this result to show the existence of locking schemes with key size independent of the message length. * We give efficient constructions of bases satisfying metric uncertainty relations. These bases are computable by quantum circuits of almost linear size. This leads to the first explicit construction of a strong information locking scheme. Moreover, we present a locking scheme that can in principle be implemented with current technology. These constructions are obtained by adapting an explicit norm embedding due to Indyk (2007) and an extractor construction of Guruswami, Umans and Vadhan (2009). * We apply our metric uncertainty relations to give communication protocols that perform equality-testing of n-qubit states. We prove that this task can be performed by a single message protocol using O(log(1/e)) qubits and n bits of communication, where e is an error parameter. We also give a single message protocol that uses O(log^2 n) qubits, where the computation of the sender is efficient. Title Calculating bounds on information leakage using two-bit patterns Abstract Theories of quantitative information flow have seen growing interest recently, in view of the fundamental importance of controlling the leakage of confidential information, together with the pragmatic necessity of tolerating intuitively "small" leaks. Given such a theory, it is crucial to develop automated techniques for calculating the leakage in a system. 
In this paper, we address this question in the context of deterministic imperative programs and under the recently-proposed min-entropy measure of information leakage, which measures leakage in terms of the confidential information's vulnerability to being guessed in one try by an adversary. In this context, calculating the maximum leakage of a program reduces to counting the number of feasible outputs that it can produce. We approach this task by determining patterns among pairs of bits in the output, for instance by determining that two bits must be unequal. By counting the number of solutions to the two-bit patterns, we obtain an upper bound on the number of feasible outputs and hence on the leakage. We explore the effectiveness of our approach on a number of case studies, in terms of both efficiency and accuracy. Title Application of hybrid decoder of turbo code for wireless communication Abstract Since 1993, turbo codes and turbo product codes (TPC) have been widely studied and adopted in several communication systems. The iterative decoding of TPCs is traditionally performed using several soft-input/soft-output (SISO) decoders whose computational complexity can be considerable. For this reason, the complexity reduction of SISO decoders has remained an attractive research topic. In 2009, Dweik and Sharif proposed a novel 'hybrid' iterative TPC decoder using both HIHO and SISO constituent decoders. Compared to a classical TPC decoder relying only on SISO decoders, such a hybrid decoder can offer reduced complexity and/or improved BER performance. Furthermore, the hybrid decoder provides a high degree of design flexibility, thus allowing for an optimization of the performance/complexity tradeoff. In this paper we propose applying this decoder as forward error correction (FEC) in different wireless communication systems such as 802.16, and compare its performance with the existing system. CCS Mathematics of computing Mathematical analysis Numerical analysis CCS Mathematics of computing Mathematical analysis Mathematical optimization CCS Mathematics of computing Mathematical analysis Differential equations CCS Mathematics of computing Mathematical analysis Calculus CCS Mathematics of computing Mathematical analysis Functional analysis CCS Mathematics of computing Mathematical analysis Integral equations Title Mahler measures, short walks and log-sine integrals: a case study in hybrid computation Abstract The Mahler measure of a polynomial of several variables has been a subject of much study over the past thirty years. Very few closed forms are proven but many more are conjectured. Title Integration in finite terms of non-liouvillian functions Abstract Title VarInt: variational integrator design with maple Abstract Geometric numerical integration refers to a class of numerical integration algorithms that preserve the differential geometric structure that defines the evolution of dynamical systems. For simulations over relatively short time spans, as compared to the intrinsic time scales, standard (non-geometric) integrators are often advantageous, as they include adaptive and multistep methods, which can be both accurate and fast. Extended computations require a different, geometric approach, as non-geometric methods tend to generate or dissipate energy artificially due to the fact that they do not respect the fundamental geometry of the phase flow, which means that at some point the errors dominate.
It is common to create geometric numerical integrators based on either previous knowledge of classes of non-geometric numerical integrators, from which specific instances can be derived that are geometric, or truncated series solutions to the Hamilton--Jacobi equation for (canonical) transformations near the identity. Unfortunately, the actual calculations for higher-order geometric numerical algorithms are generally quite involved. There is, however, an alternative strategy that avoids many of the difficulties inherent in the design of higher-order versions of these geometric numerical integrators. It relies on the discretization of the action functional, from which one derives the numerical algorithms in a straightforward manner. These so-called variational integrators preserve the differential geometric structure automatically; all conserved quantities are preserved infinitesimally, too. An easy-to-use and freely available package named VarInt is presented that enables one to generate and analyse new variational integrators systematically to arbitrary order with Maple. All VarInt requires from the user is the action and a quadrature formula to approximate it. One can either select one of the built-in quadrature rules, or supply one manually. With VarInt it is now possible to venture beyond the standard geometric numerical integrators, without the need for an advanced appreciation of the mathematical details. Several numerical examples, obtained with a basic numerical analysis tool in VarInt, demonstrate the superior performance of new variational integrators for certain classes of dynamical systems. VarInt is ideally suited for researchers and engineers who wish to design, study, test and/or analyse new geometric numerical integration algorithms without the hassle of laborious computations. All algorithms can be tuned to the required level of specificity for the problem at hand thanks to Maple's symbolic capabilities and its code optimization procedures. In addition, VarInt can be of value in the classroom, as a tool to assist in increasing the understanding of variational integrators. Title A maple package for integro-differential operators and boundary problems Abstract Title An implementation of the method of brackets for symbolic integration Abstract In spite of being a classical problem, the current techniques available for Symbolic Integration are not sufficient to evaluate a variety of integrals coming from Mathematical Physics, such as those involving Bessel functions. The Method of Brackets [2, 3], a heuristic process appearing in the evaluation of Feynman diagrams, can be used to evaluate symbolically a large class of single or multiple integrals. It represents an extension of the so-called Ramanujan Master Theorem [1]. The first implementation of the Method of Brackets has been written by the author in the open-source computer algebra system Sage. This implementation allows experimentation with representations of the integrand, which can affect output and efficiency. An algorithm that chooses the best representation of the integrand is being developed.
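For context on the Method of Brackets abstract above, the Ramanujan Master Theorem that it extends can be stated as follows; this is a standard textbook statement added for orientation, not quoted from the paper.

```latex
% Ramanujan Master Theorem (standard statement, under suitable growth conditions on \varphi):
% if  f(x) = \sum_{k \ge 0} \frac{\varphi(k)}{k!}\,(-x)^k,  then
\int_0^{\infty} x^{s-1} f(x)\, dx \;=\; \Gamma(s)\, \varphi(-s).
```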
Title A symbolic summation approach to feynman integrals Abstract Title Solving integrals with the quantum computer algebra system Abstract Title Algorithm 876: Solving Fredholm Integral Equations of the Second Kind in Matlab Abstract We present here the algorithms and user interface of a Title Stochastic integral equation solver for efficient variation-aware interconnect extraction Abstract In this paper we present an efficient algorithm for extracting the complete statistical distribution of the input impedance of interconnect structures in the presence of a large number of random geometrical variations. The main contribution in this paper is the development of a new algorithm, which combines both Neumann expansion and Hermite expansion, to accurately and efficiently solve stochastic linear systems of equations. The second contribution is a new theorem to efficiently obtain the coefficients of the Hermite expansion while computing only low order integrals. We establish the accuracy of the proposed algorithm by solving stochastic linear systems resulting from the discretization of the stochastic volume integral equation and comparing our results to those obtained from other techniques available in the literature, such as Monte Carlo and stochastic finite element analysis. We further prove the computational efficiency of our algorithm by solving large problems that are not solvable using the current state of the art. Title Multi-objective circuit partitioning for cutsize and path-based delay minimization Abstract CCS Mathematics of computing Mathematical analysis Nonlinear equations Title Network synchronization and localization based on stolen signals Abstract We consider an anchor-free, relative localization and synchronization problem where a set of Title A study of Hensel series in general case Abstract The Hensel series is a series expansion of a multivariate algebraic function at its singular point. The Hensel series is computed by the (extended) Hensel construction, and it is expressed in a well-structured form. In previous papers, we clarified theoretically various interesting properties of Hensel series in restricted cases. In this paper, we present a theory of Hensel series in the general case. In particular, we investigate the Hensel series arising from a non-squarefree initial factor, and derive a formula which shows the "fine structure" of the Hensel series. If we trace a Hensel series along a path passing through a divergence domain, the Hensel series often jumps from one branch of the algebraic function to another. We investigate the jumping phenomenon near the ramification point, which has not been clarified in our previous papers. Title Numerical calculation of H-bases for positive dimensional varieties Abstract A symbolic-numeric method for calculating an H-basis for the ideal of a positive dimensional complex affine algebraic variety, possibly defined numerically, is given. H-bases for ideals Applications include factoring multivariable polynomials, analyzing singular curves, finding equations for the union of varieties, and, most importantly, finding equations for components of reducible varieties given numerically. Title Numerical stability of barycentric Hermite root-finding Abstract Computing the roots of a polynomial expressed in the Lagrange basis or a Hermite interpolational basis can be reduced to computing the eigenvalues of the corresponding companion matrix [2].
The result we present here is that roots of a polynomial computed via this method are exactly the roots of a polynomial with slightly perturbed coefficients. Title Accelerate TV-L1 optical flow with edge-based image decomposition and its implementation on mobile phone Abstract Variational methods are among the most accurate techniques of optical flow computation. TV- Title A breakthrough in algorithm design Abstract Computer scientists at Carnegie Mellon University have devised an algorithm that might be able to solve a certain class of linear systems much more quickly than today's fastest solvers. Title Solving bivariate polynomial systems on a GPU Abstract Title Root lifting techniques and applications to list decoding Abstract Motivated by Guruswami and Rudra's construction of folded Reed-Solomon codes, we give algorithms to solve functional equations of the form Q(x, f(x), f(x)) = 0, where Q is a trivariate polynomial. We compare two approaches, one based on Newton's iteration and the second using relaxed series techniques. Title A stamina-aware sightseeing tour scheduling method Abstract In general, a tour schedule is composed of multiple sightseeing spots taking into account the user's preferences. However, during the tour, the stamina of the tourists may be exhausted. In this paper, we propose a sightseeing scheduling method that maximizes the degree of user satisfaction taking stamina into account. In our method, break times are allocated in the schedule to satisfy the stamina constraint. Since this problem subsumes the traveling salesman problem (TSP) and is thus NP-hard, it is difficult to solve in practical time. To calculate a semi-optimal solution in practical time, we propose a method that first composes a schedule visiting multiple sightseeing spots without considering stamina, and then, to recover stamina, allocates break times based on a predatory search technique. To evaluate the proposed method, we compared our method with several conventional methods, including a brute-force method, through a simulation experiment. As a result, the proposed method composed a schedule in practical time whose expected degree of satisfaction was near the optimum. Title Deflation and certified isolation of singular zeros of polynomial systems Abstract We develop a new symbolic-numeric algorithm for the certification of singular isolated points, using their associated local ring structure and certified numerical computations. An improvement of an existing method to compute inverse systems is presented, which avoids redundant computation and reduces the size of the intermediate linear systems to solve. We derive a one-step deflation technique from the description of the multiplicity structure in terms of differentials. The deflated system can be used in Newton-based iterative schemes with quadratic convergence. Starting from a polynomial system and a sufficiently small neighborhood, we obtain a criterion for the existence and uniqueness of a singular root of a given multiplicity structure, applying a well-chosen symbolic perturbation. Standard verification methods, based e.g. on interval arithmetic and a fixed point theorem, are employed to certify that there exists a unique perturbed system with a singular root in the domain. Applications to topological degree computation and to the analysis of real branches of an implicit curve illustrate the method.
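The deflation abstract above feeds its deflated systems into Newton-based iterative schemes with quadratic convergence. For reference, a minimal multivariate Newton iteration looks like the sketch below; the example system, starting point, and tolerance are illustrative assumptions, not taken from the paper.

```python
# A minimal multivariate Newton iteration: at each step solve J(x) * step = F(x)
# and update x <- x - step; near a simple (non-singular) root the convergence is quadratic.
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), F(x))   # Newton step from the local linearization
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Example: intersect the unit circle with the line y = x.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
print(newton(F, J, [1.0, 0.5]))              # converges to (sqrt(2)/2, sqrt(2)/2)
```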
CCS Mathematics of computing Mathematical analysis Quadrature Title Corrigendum: Algorithm 902: GPOPS, a MATLAB software for solving multiple-phase optimal control problems using the gauss pseudospectral method Abstract An algorithm is described to solve multiple-phase optimal control problems using a recently developed numerical method called the Title A particle-spring approach to geometric constraints solving Abstract Current iterative numerical methods, such as continuation or Newton-Raphson, work only on systems for which the corresponding matrix is a square one. Geometric constraint systems thus need either to have no degrees of freedom, or to be systems the software can anchor. In this article, we propose a new iterative numerical approach which can handle both rigid and under-rigid geometric constraint systems. It is based on translating the system into a particle-spring system where particles correspond to the geometric entities and springs to the constraints. We show that consistently over-constrained systems are also solved. We show that our approach is promising by giving results of a prototype implementation. We propose directions for enhancing the approach that could tackle its drawbacks (mainly stability). Title Invariants and symbolic calculations in the theory of quadratic differential systems Abstract While quadratic differential systems arise in many areas of applied mathematics and also have theoretical importance, the topological classification of this class remains an extremely hard problem. However, in recent years much progress has been achieved due to the use of computer algebra and numerical calculations for obtaining complete classifications of some families of quadratic systems by effectively computing polynomial invariants and by an interplay between computer algebra and numerical computations. We illustrate on a specific family how these techniques yield the complete classification of the family within the 12-dimensional space of the coefficients of the systems. Title Algorithm 906: elrint3d—A Three-Dimensional Nonadaptive Automatic Cubature Routine Using a Sequence of Embedded Lattice Rules Abstract A three-dimensional automatic cubature routine, called Title Algorithm 902: GPOPS, A MATLAB software for solving multiple-phase optimal control problems using the gauss pseudospectral method Abstract An algorithm is described to solve multiple-phase optimal control problems using a recently developed numerical method called the Title Interdisciplinary applications of mathematical modeling Abstract We demonstrate applications of numerical integration and visualization algorithms in diverse fields including psychological modeling (biometrics); in high energy physics for the study of collisions of elementary particles; and in medical physics for regulating the dosage of proton beam radiation therapy. We discuss the problems and solution methods, as supported by numerical results. Title Beautiful differentiation Abstract Automatic differentiation (AD) is a precise, efficient, and convenient method for computing derivatives of functions. Its forward-mode implementation can be quite simple even when extended to compute all of the higher-order derivatives as well. The higher-dimensional case has also been tackled, though with extra complexity.
This paper develops an implementation of higher-dimensional, higher-order, forward-mode AD in the extremely general and elegant setting of Title Algorithm 882: Near-Best Fixed Pole Rational Interpolation with Applications in Spectral Methods Abstract We present a numerical procedure to compute the nodes and weights in rational Gauss-Chebyshev quadrature formulas. Under certain conditions on the poles, these nodes are near best for rational interpolation with prescribed poles (in the same sense that Chebyshev points are near best for polynomial interpolation). As an illustration, we use these interpolation points to solve a differential equation with an interior boundary layer using a rational spectral method. The algorithm to compute the interpolation points (and, if required, the quadrature weights) is implemented as a Matlab program. Title Geometrically adaptive numerical integration Abstract Numerical integration over solid domains often requires geometric adaptation to the solid's boundary. Traditional approaches employ hierarchical adaptive space decomposition, where the integration cells intersecting the boundary are either included or discarded based on their position with respect to the boundary and/or statistical measures. These techniques are inadequate when accurate integration near the boundary is particularly important. In boundary value problems, for instance, a small error in the boundary cells can lead to a large error in the computed field distribution. We propose a novel technique for exploiting the exact local geometry in boundary cells. A classification system similar to marching cubes is combined with a suitable parameterization of the boundary cell's geometry. We can then allocate integration points in boundary cells using the exact geometry instead of relying on statistical techniques. We show that the proposed geometrically adaptive integration technique yields greater accuracy with fewer integration points than previous techniques. Title Efficient Gauss-related quadrature for two classes of logarithmic weight functions Abstract Integrals with logarithmic singularities are often difficult to evaluate by numerical methods. In this work, a quadrature method is developed that allows the exact evaluation (up to machine accuracy) of integrals of polynomials with two general types of logarithmic weights. The total work for the determination of This quadrature method can then be used to generate the nonclassical orthogonal polynomials for weight functions containing logarithms and obtain Gauss and Gauss-related quadratures for these weights. Two algorithms for each of the two types of logarithmic weights that incorporate these methods are given in this paper. 
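As a baseline for the logarithmic-weight quadrature abstract above, the classical Gauss-Legendre rule has the same node/weight form; the sketch below uses numpy's built-in nodes and weights. Handling log-singular weights requires the nonclassical orthogonal polynomials the paper constructs, which are not shown here; the example integrand is an illustrative assumption.

```python
# A minimal Gaussian quadrature sketch: an n-point Gauss-Legendre rule integrates
# polynomials of degree <= 2n-1 exactly on [-1, 1] as a weighted sum over the nodes.
import numpy as np

def gauss_legendre_integrate(f, n):
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return float(np.sum(weights * f(nodes)))

# Example: integrate x^4 over [-1, 1]; the exact value is 2/5.
print(gauss_legendre_integrate(lambda x: x**4, n=3))
```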
CCS Mathematics of computing Continuous mathematics Calculus CCS Mathematics of computing Continuous mathematics Topology CCS Mathematics of computing Continuous mathematics Continuous functions CCS Information systems Data management systems Database design and models CCS Information systems Data management systems Data structures CCS Information systems Data management systems Database management system engines CCS Information systems Data management systems Query languages CCS Information systems Data management systems Database administration CCS Information systems Data management systems Information integration CCS Information systems Data management systems Middleware for databases CCS Information systems Information storage systems Information storage technologies CCS Information systems Information storage systems Record storage systems CCS Information systems Information storage systems Storage replication CCS Information systems Information storage systems Storage architectures CCS Information systems Information storage systems Storage management CCS Information systems Information systems applications Enterprise information systems CCS Information systems Information systems applications Collaborative and social computing systems and tools CCS Information systems Information systems applications Spatial-temporal systems CCS Information systems Information systems applications Decision support systems CCS Information systems Information systems applications Mobile information processing systems CCS Information systems Information systems applications Process control systems Title Dynamic tuning of feature set in highly variant interactive applications Abstract For important classes of interactive consumer applications, such as gaming and video, the Quality-of-Service requirement is to create a maximally immersive experience for the interactive user. This necessitates a trade-off between maximizing the computational complexity of application features versus the need to maintain a smooth and sufficiently high frame-rate. The implementation of these applications using conventional C/C++/Java development flows, their highly data-dependent time-varying nature, and the lack of analytical models for their execution time behavior pose unique challenges in obtaining significant QoS improvements. In this paper, we propose an adaptive feedback controller that dynamically tunes the application feature set in the face of the challenges outlined above. We use a system-identification strategy where the controller estimates an application's execution characteristics based on Title Implementation of model predictive control with modified minimal model on low-power RISC microcontrollers Abstract Due to the ability of modeling multivariable systems and handling constraints in the control framework, model predictive control (MPC) has received a lot of interest from both academic and industrial communities. Although it is an established control technique, implementing MPC on small-scale devices is a challenge since we need to handle complicated issues of the control framework using limited computational power and hardware resources. This paper presents our implementation of MPC with constraints on the Texas Instruments MSP430 16-bit microcontroller platform. 
The MPC operational constraints which are supported in our design include rate-of-change, amplitude, and output constraints, while the associated optimization problem is solved using a primal-dual interior-point algorithm based on the predictor-corrector method. Our implementation is demonstrated in a prototype of a real-time closed-loop blood glucose regulation system using a modification of the minimal model. Experimental results show that our system is able to achieve desired diabetes management, and the chosen microprocessor is capable of performing the MPC algorithm accurately, with high energy efficiency, and in real time. Title Adaptive feed-forward and feedback control using neural networks for oxygen ratio in fuel cell stacks Abstract Automatic control of fuel cell stacks (FCS) using non-adaptive and adaptive radial basis function (RBF) neural network methods is investigated in this paper. The neural network inverse model is used to estimate the compressor voltage for fuel cell stack control at different current demands and with a 30% reduction in the compressor gain in order to prevent oxygen starvation. A PID controller is used in the feedback to adjust the difference between the requested and the actual oxygen ratio by compensating the neural network inverse model output. Furthermore, the RBF inverse model is made adaptive to cope with the significant parameter uncertainty, disturbances and environment changes. Simulation results show the effectiveness of the adaptive control strategy. Title Process performance management: illuminating design issues through a systematic problem analysis Abstract Business processes are the means by which organizations create value. Consequently, organizations need to continuously monitor and control their processes' performance so as to provide a consistent and predictable execution quality. A number of today's organizations, however, appear to encounter difficulties with measuring and improving their processes' performance. In this paper, we set out to identify the gap between how organizations currently approach process performance management (PPM) and what they are striving to realize in the future. The systematic gap analysis results in a set of design factors that are valuable in guiding future design efforts for useful and relevant PPM solutions. Title An augmented reality learning space for PC DIY Abstract Because of the advances of computer hardware and software, Computer Aided Instruction (CAI) makes learning effective and interesting through the use of interactive multimedia technology. Recently, Augmented Reality (AR) technology has begun to surge as a new CAI tool because of its ability to create tangible and highly interactive user interfaces. In addition, recent studies have shown that the learning content as well as the participation of learners in learning activities can greatly affect learners' learning performance. However, studies of the integration of PC DIY (Personal Computer Do It Yourself) learning with AR technology are still few in the current literature. Therefore, this study proposes an AR learning space for PC DIY whose system architecture and implementation are detailed. To evaluate the usability of the proposed system, a questionnaire is given to twenty-six graduate students after their hands-on experience with the prototype. Results of the questionnaire show that the proposed AR learning space for PC DIY offers students a motivating, pleasant, and satisfying learning experience. Limitations, conclusions, and future studies are given.
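Relating to the control abstracts above (the MSP430 MPC implementation and the PID loop compensating the RBF inverse model), a minimal discrete-time PID controller in a toy closed loop is sketched below; the gains and the first-order plant are illustrative assumptions only.

```python
# A minimal discrete-time PID controller: the control signal is a weighted sum of the
# current error, its running integral, and its finite-difference derivative.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy closed loop: first-order plant y[k+1] = 0.9*y[k] + 0.1*u[k], setpoint 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
y = 0.0
for _ in range(100):
    u = pid.step(1.0, y)
    y = 0.9 * y + 0.1 * u
print(round(y, 3))  # should settle near the setpoint
```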
Title Improving the reliability of embedded systems as complexity increases: supporting the migration between event-triggered and time-triggered software architectures Abstract We can divide the software architectures employed in embedded systems into two categories - time-triggered (TT) and event-triggered (ET) - based on the way in which the various system tasks are initiated. ET architectures are suitable for use with small systems of limited complexity: as systems grow, it may be necessary to migrate the existing code to a TT architecture. This paper is concerned with techniques which may be used to support the migration between ET and TT architectures. Title Safety and security in industrial control Abstract We present a view on system security which draws from previous experience in dealing with system safety. This survey paper focuses on exploring the commonalities between safety and security, with both treated as mutually complementary views of the same problem: security as protecting a computer system against the threats of the external environment, and safety as protecting the environment from potential dangers of a computer system. Mutual relationships of safety and security are discussed. Title What is the statistical method for at-speed testing? Abstract Manufacturing testing becomes increasingly difficult in the nanometer manufacturing region because of the impacts of process variation on path delays. It has been frequently observed from manufacturing testing that different chips exhibit different speed-limiting paths, and different sets of paths may fail to meet the timing specification for different chips. Title Modifying Erlang B table based upon data mining Abstract During the process of mobile network planning and optimization, capacity prediction is one of the important tasks. Accurate capacity results will not only improve the running quality of the network, but will also improve user satisfaction. Traditionally, the Erlang B formula was used to forecast network capacity. The forecasted results were quite suitable for the wired network. When the same method is used for wireless voice traffic prediction, the results are inaccurate because of the differences between wired and wireless channels. To precisely forecast the traffic capacity of a wireless network, the Erlang B formula should be modified to fit the characteristics of wireless systems. In this paper, using data cleansing based on a clustering algorithm together with other data mining methods, a modified Erlang B table was implemented, including its system design and process flow. A field test is presented to compare the theoretical and modified results (a minimal sketch of the classical Erlang B recursion appears after the abstracts below). Title Metrics for co-evolving autonomous systems Abstract Autonomous system innovations have overrun the test and evaluation capability to find problems before they become expensive to fix --- or lethal. The autonomy paradigm demands that an equivalent test and evaluation system be conceived, architected and engineered, operated and evolved. This in turn demands an autonomous test and evaluation enterprise, staffed with competent systemists, as the enabling agent. This paper outlines the metrics and key capabilities for realizing such an enterprise. It features a game-theoretic basis, a model-based systems engineering approach and a four-part strategic framework. This paper focuses on the unclassified situation in the U.S. Dept. of Defense.
However, these ideas will apply to other domains of autonomy in both the public and private sectors. CCS Information systems Information systems applications Multimedia information systems CCS Information systems Information systems applications Data mining CCS Information systems Information systems applications Digital libraries and archives Title Uffizi touch®: a new experience with art Abstract Centrica (www.centrica.it) has developed Uffizi Touch®, Title Metadata visualization of scholarly search results: supporting exploration and discovery Abstract Studies of online search behaviour have found that searchers often face difficulties formulating queries and exploring the search result sets. These shortcomings may be especially problematic in digital libraries since library searchers employ a wide variety of information seeking methods (with varying degrees of support), and the corpus to be searched is often more complex than simple textual information. This paper presents Bow Tie Academic Search, an interactive Web-based academic library search interface aimed at supporting the strategic retrieval behaviour of searchers. In this system, a histogram of the most frequently used keywords in the top search results is provided, along with a compact visual encoding that represents document similarities based on the co-use of keywords. In addition, the list-based representation of the search results is enhanced with visual representations of citation information for each search result. A detailed view of this citation information is provided when a particular search result is selected. These tools are designed to provide visual and interactive support for query refinement, search results exploration, and citation navigation, making extensive use of the metadata provided by the underlying academic information retrieval system. Title Addressing the long tail in empirical research data management Abstract At present, efforts are being made to treat research data as bibliographic artifacts for re-use, transparency and citation. When approaching research data management solutions, it is imperative to consider carefully how filed data can be retrieved and accessed again on the user side. In the field of economics, a large amount of research is based on empirical data, which is often combined from several sources such as data centers, affiliated institutes or self-conducted surveys. Respecting this practice, we motivate and elaborate on techniques for fine-grained referencing of data fragments so as to avoid multiple copies of the same data being archived over and over again, which may result in questionable transparency and difficult curation tasks. In addition, machines should have a deeper understanding of the given data, so that high-quality services can be installed. The paper first discusses the challenges of managing research data as used in empirical research. We then compare referencing and copying strategies and reflect on their respective implications. Building on this argument, we elaborate on a data representation model, which we further examine with regard to possible extensions. A Generating Model is subsequently introduced to enable citation, transparency and re-use. Finally, we close with a demonstration of an exploratory prototype for data access and investigate a distance metric for assisting in finding similar data sets and evaluating existing compositions.
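Returning to the Erlang B abstract above, the classical recursion it modifies can be computed as follows; the traffic value and channel count in the example are illustrative only.

```python
# A minimal sketch of the classical Erlang B recursion for the blocking probability:
# B(E, 0) = 1;  B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1))
def erlang_b(traffic_erlangs, channels):
    b = 1.0
    for m in range(1, channels + 1):
        b = traffic_erlangs * b / (m + traffic_erlangs * b)
    return b  # probability that an arriving call is blocked

print(f"{erlang_b(10.0, 15):.4f}")  # blocking probability for 10 Erlangs over 15 channels
```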
Title Document and archive: editing the past Abstract Document engineering has a difficult task: to propose tools and methods to manipulate contents and make sense of them. This task is still harder when dealing with archives, insofar as document engineering has not only to provide tools for expressing sense but above all tools and methods to keep contents accessible in their integrity and intelligible according to their meaning. However, these objectives may be contradictory: access implies transforming contents to make them accessible through networks, tools and devices. Intelligibility may imply adapting contents to the current state of knowledge and capacity of understanding. But, by doing that, can we still speak of authenticity, integrity, or even the identity of documents? Document engineering has provided powerful means to express meaning and to turn an intention into a semiotic expression. Document repurposing has become a usual way for exploiting libraries, archives, etc. By enabling the reuse of a specific part of a given content, repurposing techniques make it possible to entirely renegotiate the meaning of this part by changing its context, its interactivity, in short the way people can consider this piece of content and interpret it. Put in this way, there could be an antinomy between archiving and document engineering. However, transforming documents and editing content is an efficient way to keep them alive and compelling for people. Preserving contents does not consist in simply storing them but in actively transforming them to adapt them technically and keep them intelligible. Editing the past is then a new challenge, merging a content deontology with a document technology. This challenge implies redefining some classical notions, such as authenticity, and highlights the need for new concepts and methods. Especially in a digital world, documents are permanently reconfigured by technical tools that produce variants, similar contents calling into question the usual definition of the identity of documents. Editing the past calls for a new critique of variants. Title Structural and visual comparisons for web page archiving Abstract In this paper, we propose a Web page archiving system that combines state-of-the-art comparison methods based on the source code of Web pages with computer vision techniques. To detect whether successive versions of a Web page are similar or not, our system is based on: (1) a combination of structural and visual comparison methods embedded in a statistical discriminative model, (2) a visual similarity measure designed for Web pages that improves change detection, (3) a supervised feature selection method adapted to Web archiving. We train a Support Vector Machine model with vectors of similarity scores between successive versions of pages. The trained model then determines whether two versions, defined by their vector of similarity scores, are similar or not. Experiments on real archives validate our approach. Title DocExplore: overcoming cultural and physical barriers to access ancient documents Abstract In this paper, we describe DocExplore, an integrated software suite centered on the handling of digitized documents with an emphasis on ancient manuscripts. This software suite allows the augmentation and exploration of ancient documents of cultural interest. Specialists can add textual and multimedia data and metadata to digitized documents through a graphical interface that does not require technical knowledge.
They are helped in this endeavor by sophisticated document analysis tools that allow, for instance, spotting words or patterns in images of documents. The suite is intended to ease considerably the process of bringing locked-away historical materials to the attention of the general public by covering all the steps from managing a digital collection to creating interactive presentations suited for cultural exhibitions. Its genesis and sustained development reside in a collaboration of archivists, historians and computer scientists, the latter being not only in charge of the development of the software, but also of creating and incorporating novel pattern recognition techniques for document analysis. Title In search of a good novel, neither reading activity nor querying matter, but examining search results does Abstract Borrowing novels is a major activity in public libraries. However, the interest in developing tools for fiction searching and analyzing the use of these tools is minor. This study examines how tools provided by an enriched public library catalogue are used to access novels to read. 58 users searched for interesting novels to read in a simulated situation where they had only a vague idea of what they would like to read. Data consist of search logs, pre- and post-search questionnaires and observations. For analyzing associations between novel reading activity, search variables and search success, Pearson correlation coefficients were calculated. Based on this information, path models were built for predicting search success, i.e. the interest ratings of the novels found. Investing effort in examining results improves search success, i.e. finding interesting novels, whereas effort in querying has no bearing on it. Novel reading activity was not associated with the search process, effort, or success variables observed. The results suggest that, in designing systems for fiction retrieval, enriching result presentation with more detailed book information would benefit users in identifying good novels. Title Unlocking radio broadcasts: user needs in sound retrieval Abstract This poster reports the preliminary results of a user study uncovering the information seeking behaviour of humanities scholars dedicated to radio research. The study is part of an interdisciplinary research project on radio culture and auditory resources. The purpose of the study is to inform the design of information architecture and interaction design of a research infrastructure that will enable future radio- and audio-based research. Results from a questionnaire survey on humanities scholars' research interests and information needs, preferred access points, and indexing levels are reported. Finally, a flexible metadata schema is suggested that includes both general metadata and highly media- and research-project-specific metadata. Title 'Erasmus': an organization- and user-centered Dublin Core metadata tool Abstract Digital library interoperability is supported by good quality metadata. The design of metadata creation and management tools is therefore an important component of overall digital library design. A number of factors affect metadata tool usability, including task complexity, interface usability, and organizational context of use. These issues are being addressed in the user-centered design of a metadata tool for the Internet Public Library.
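The correlation analysis described in the novel-search study above ("In search of a good novel") can be sketched in a few lines of Python. The numbers below are entirely synthetic placeholders for per-searcher effort and success measures; only the statistic (Pearson's r) matches the abstract.

```python
import numpy as np

# Synthetic per-searcher measurements: effort spent examining results,
# effort spent on querying, and search success operationalised as the
# mean interest rating of the novels found.
examination_effort = np.array([12, 30, 18, 25, 40, 8, 22, 35])
querying_effort = np.array([5, 4, 9, 3, 6, 7, 2, 5])
interest_rating = np.array([2.1, 3.4, 2.8, 3.0, 3.9, 1.8, 2.9, 3.6])

# Pearson correlation of each effort variable with search success.
for name, effort in [("examination effort", examination_effort),
                     ("querying effort", querying_effort)]:
    r = np.corrcoef(effort, interest_rating)[0, 1]
    print(f"{name}: r = {r:.2f}")
```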
Title Categorization of computing education resources with utilization of crowdsourcing Abstract The Ensemble Portal harvests resources from multiple heterogeneous federated collections. Managing these dynamically increasing collections requires an automatic mechanism to categorize records into corresponding topics. We propose an approach to use existing ACM DL metadata to build classifiers for harvested resources in the Ensemble project. We also present our experience with utilizing the Amazon Mechanical Turk platform to build ground truth training data sets from Ensemble collections. CCS Information systems Information systems applications Computational advertising CCS Information systems Information systems applications Computing platforms CCS Information systems World Wide Web Web searching and information discovery CCS Information systems World Wide Web Online advertising CCS Information systems World Wide Web Web mining CCS Information systems World Wide Web Web applications CCS Information systems World Wide Web Web interfaces CCS Information systems World Wide Web Web services CCS Information systems World Wide Web Web data description languages CCS Information systems Information retrieval Document representation CCS Information systems Information retrieval Information retrieval query processing CCS Information systems Information retrieval Users and interactive retrieval CCS Information systems Information retrieval Retrieval models and ranking CCS Information systems Information retrieval Retrieval tasks and goals CCS Information systems Information retrieval Evaluation of retrieval results CCS Information systems Information retrieval Search engine architectures and scalability CCS Information systems Information retrieval Specialized information retrieval CCS Security and privacy Cryptography Key management CCS Security and privacy Cryptography Public key (asymmetric) techniques CCS Security and privacy Cryptography Symmetric cryptography and hash functions CCS Security and privacy Cryptography Cryptanalysis and other attacks Title SCOTT: set cover tracing technology Abstract In this paper, we describe SCOTT: a demonstration system that uses the Set Cover Tracing algorithm for determining the source of pirate content. This algorithm is very efficient in dealing with collusion attacks - the performance is close to linear in the number of colluders. However, the algorithm is based on the Set Cover Problem, which is known to be NP-hard. SCOTT confirms the assertion in the original paper that a set cover algorithm is efficient in this particular application. The SCOTT system is suitable for use in commercial applications, the most notable of which is tracing the source of pirate Blu-ray movies. (Blu-ray players contain a built-in tracing traitors key assignment.) It also contains a visualization of the tracing process. After each pirate movie, SCOTT displays the universe of all players and its estimate of guilt for each player. Title Attack models and scenarios for networked control systems Abstract Cyber-secure networked control is modeled, analyzed, and experimentally illustrated in this paper. An attack space defined by the adversary's system knowledge, disclosure, and disruption resources is introduced. Adversaries constrained by these resources are modeled for a networked control system architecture. It is shown that attack scenarios corresponding to replay, zero dynamics, and bias injection attacks can be analyzed using this framework.
An experimental setup based on a quadruple-tank process controlled over a wireless network is used to illustrate the attack scenarios, their consequences, and potential counter-measures. Title Analyzing spammers' social networks for fun and profit: a case study of cyber criminal ecosystem on twitter Abstract In this paper, we perform an empirical analysis of the cyber criminal ecosystem on Twitter. Essentially, through analyzing inner social relationships in the criminal account community, we find that criminal accounts tend to be socially connected, forming a small-world network. We also find that criminal hubs, sitting in the center of the social graph, are more inclined to follow criminal accounts. Through analyzing outer social relationships between criminal accounts and their social friends outside the criminal account community, we reveal three categories of accounts that have close friendships with criminal accounts. Through these analyses, we provide a novel and effective criminal account inference algorithm by exploiting criminal accounts' social relationships and semantic coordinations. Title Instruction embedding for improved obfuscation Abstract Disassemblers generally assume that assembly language instructions do not overlap, therefore, an obvious obfuscation against such disassemblers is to overlap instructions. This is difficult to implement, however, as the number of instructions existing in a program which can be overlapped are typically very few. We propose a modification of instruction overlapping which instead Title Dickie George: looking back on 40 years at the NSA Abstract Title SENTINEL: securing database from logic flaws in web applications Abstract Logic flaws within web applications allow the attackers to disclose or tamper sensitive information stored in back-end databases, since the web application usually acts as the single trusted user that interacts with the database. In this paper, we model the web application as an extended finite state machine and present a black-box approach for deriving the application specification and detecting malicious SQL queries that violate the specification. Several challenges arise, such as how to extract persistent state information in the database and infer data constraints. We systematically extract a set of invariants from observed SQL queries and responses, as well as session variables, as the application specification. Any suspicious SQL queries that violate corresponding invariants are identified as potential attacks. We implement a prototype detection system SENTINEL (SEcuriNg daTabase from logIc flaws iN wEb appLication) and evaluate it using a set of real-world web applications. The experiment results demonstrate the effectiveness of our approach and show that acceptable performance overhead is incurred by our implementation. Title SWIPE: eager erasure of sensitive data in large scale systems software Abstract We describe SWIPE, an approach to reduce the life time of sensitive, memory resident data in large scale applications written in C. In contrast to prior approaches that used a delayed or lazy approach to the problem of erasing sensitive data, SWIPE uses a novel Title Cryptanalysis of the stream cipher BEAN Abstract BEAN is a recent stream cipher proposal that uses Feedback with Carry Shift Registers (FCSRs) and an output function. There is a sound motivation behind the use of FCSRs in BEAN as they provide several cryptographically interesting properties. 
In this paper, we show that the output function is not optimal. We give an efficient distinguisher and a key recovery attack that is slightly better than brute force, requiring no significant memory. We then show how this attack can be made better with access to more keystream. Already with access to 6 KiB, the 80-bit key is recovered in time 2 Title Rethinking cyber security Abstract It is clear that the Internet is transforming the way we live, and the recent decades have witnessed dramatic developments in information and communication technologies (ICT). Along with the phenomenal growth in the technology-enabled information economy has been a growth in crimes related to computing technology. Security and privacy issues have become increasingly significant over the years and are expected to continue to dominate the technology scene with the increased focus on the digital economy, the dramatic growth in social networking, and the adoption of technologies such as cloud computing by businesses. In this talk I will begin with a brief look at current trends in the technology landscape and some of the key security issues that are impacting business and society. In particular, I will describe the notion that I refer to as increasing threat velocity, with more and more attacks and their dynamic nature, an evolving set of bad guys with different motives, and sophisticated, easy-to-use tools readily available for ordinary users to conduct severe attacks. Hence there is a need for security professionals and researchers to rethink cyber threats and how to respond to them. In this regard, we will examine attribution, which is one of the key issues when it comes to counteracting security attacks. The unauthenticated nature of the Internet makes attribution difficult and furthermore has implications for accountability. Then the talk will focus on attacks and risks in cloud computing, where issues of security, trust and accountability are particularly significant. Cloud computing with its shared multi-tenancy environment aggravates the traditional security threats. Trust that cloud providers will provide proper security measures to counteract the security threats and ensure the availability of services and stored data becomes paramount. We will conclude the talk by discussing some key security technologies that are relevant for cloud services. Title Algebraic analysis of the SSS stream cipher Abstract Both the SSS and SOBER-t32 stream cipher designs use a single word-based shift register and a nonlinear filter function to produce keystream. In this paper we show that the algebraic attack method previously applied to SOBER-t32 is prevented from succeeding on SSS by the use of the key-dependent substitution box (SBox) in the nonlinear filter of SSS. Additional assumptions and modifications to the SSS cipher in an attempt to enable algebraic analysis result in other difficulties that also render the algebraic attack infeasible. Based on these results, we conclude that a well-chosen key-dependent substitution box used in the nonlinear filter of the stream cipher provides resistance against such algebraic attacks. CCS Security and privacy Cryptography Information-theoretic techniques CCS Security and privacy Cryptography Mathematical foundations of cryptography Title Knowledge representation in ICU communication Abstract The need to improve team communication among health care providers is imperative in order to improve quality and reduce costs.
Since most patients admitted to the Intensive Care Unit (ICU) suffer life threatening adverse events [1-2], there must be an effective and efficient communication protocol that facilitates workflow among the clinical team. In this paper, we studied the significance of communication at the ICU and ways to improve it. Through literature review, we identified and analyzed current research methods in order to locate the areas that require further exploration. Based on research methods review, we proposed our methodology to further comprehend the communication framework at the ICU which enables identifying factors that enhance and limit the communication process. This research proposes that through data collection, first hand and from literature, more communication factors can be identified. Through better understanding, we aim at building a knowledge base which will serve as the foundation to our long term goal of building an ontology-driven educational tool. Such a tool will be used to educate clinicians about miscommunication issues and as a means to improve it. The ultimate goal of our research is through improving clinical communication to reduce medical errors and costs and hence, enhance patient safety. Title Querying RDF dictionaries in compressed space Abstract The use of dictionaries is a common practice among those applications performing on huge RDF datasets. It allows long terms occurring in the RDF triples to be replaced by short IDs which reference them. This decision greatly compacts the dataset and mitigates the scalability issues underlying to its management. However, the dictionary size is not negligible and the techniques used for its representation also suffer from scalability limitations. This paper focuses on this scenario by adapting compression techniques for string dictionaries to the case of RDF. We propose a novel technique: Title Folded codes from function field towers and improved optimal rate list decoding Abstract We give a new construction of algebraic codes which are efficiently list decodable from a fraction 1-R-ε of adversarial errors where R is the rate of the code, for any desired positive constant ε. The worst-case list size output by the algorithm is O(1/ε), matching the existential bound for random codes up to constant factors. Further, the alphabet size of the codes is a constant depending only on ε --- it can be made exp(~O(1/ε In comparison, algebraic codes achieving the optimal trade-off between list decodability and rate based on folded Reed-Solomon codes have a decoding complexity of N Title Interactive information complexity Abstract The primary goal of this paper is to define and study the interactive information complexity of functions. Let f(x,y) be a function, and suppose Alice is given x and Bob is given y. Informally, the interactive information complexity IC(f) of f is the least amount of information Alice and Bob need to reveal to each other to compute f. Previously, information complexity has been defined with respect to a prior distribution on the input pairs (x,y). Our first goal is to give a definition that is independent of the prior distribution. We show that several possible definitions are essentially equivalent. We establish some basic properties of the interactive information complexity IC(f). In particular, we show that IC(f) is equal to the amortized (randomized) communication complexity of f. 
We also show a direct sum theorem for IC(f) and give the first general connection between information complexity and (non-amortized) communication complexity. This connection implies that a non-trivial exchange of information is required when solving problems that have non-trivial communication complexity. We explore the information complexity of two specific problems - Equality and Disjointness. We show that only a constant amount of information needs to be exchanged when solving Equality with no errors, while solving Disjointness with a constant error probability requires the parties to reveal a linear amount of information to each other. Title Word-based self-indexes for natural language text Abstract The inverted index supports efficient full-text searches on natural language text collections. It requires some extra space over the compressed text that can be traded for search speed. It is usually fast for single-word searches, yet phrase searches require more expensive intersections. In this article we introduce a different kind of index. It replaces the text using essentially the same space required by the compressed text alone (compression ratio around 35%). Within this space it supports not only decompression of arbitrary passages, but efficient word and phrase searches. Searches are orders of magnitude faster than those over inverted indexes when looking for phrases, and still faster on single-word searches when little space is available. Our new indexes are particularly fast at We adapt Title Tracking aggregate vs. individual gaze behaviors during a robot-led tour simplifies overall engagement estimates Abstract As an early behavioral study of what non-verbal features a robot tour guide could use to analyze a crowd, personalize an interaction and/or maintain high levels of engagement, we analyze participant gaze statistics in response to a robot tour guide's deictic gestures. There were thirty-seven participants overall, split into nine groups of three to five people each. In groups with the lowest engagement levels, aggregate gaze responses to the robot's deictic gesture involved the fewest total glance shifts, the least time spent looking at the indicated object, and no intra-participant gaze. Our diverse participants had overlapping engagement ratings within their group, and we found that a robot that tracks group rather than individual analytics could capture less noisy and often stronger trends relating gaze features to self-reported engagement scores. Thus we have found indications that aggregate group analysis captures more salient and accurate assessments of overall Title Bounds on locally testable codes with unique tests Abstract The computational complexity notion of a PCP is closely related to the combinatorial notion of a In light of the strong connection between PCPs and LTCs, one may conjecture the existence of LTCs with properties similar to the ones required by the UGC. In this work we show limitations on such LTCs: We consider 2-query LTCs with codeword testers that only make unique tests. Roughly speaking, we show that any such LTC with relative distance close to 1, almost-perfect completeness and low soundness is of constant size. While our result does not imply anything about the correctness of the UGC, it does show some limitations of unique tests, compared, for example, to projection tests.
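As a toy illustration of the aggregate-versus-individual distinction in the robot tour-guide study above, the sketch below averages gaze features per group rather than per participant. Group names, feature names, and values are invented for illustration only.

```python
from statistics import mean

# Synthetic per-participant gaze features following a robot deictic gesture.
groups = {
    "group_a": [{"glance_shifts": 3, "time_on_object": 2.5},
                {"glance_shifts": 4, "time_on_object": 3.1},
                {"glance_shifts": 2, "time_on_object": 1.9}],
    "group_b": [{"glance_shifts": 1, "time_on_object": 0.8},
                {"glance_shifts": 0, "time_on_object": 0.4},
                {"glance_shifts": 1, "time_on_object": 0.6}],
}

# Aggregate (group-level) statistics rather than per-individual tracking.
for name, members in groups.items():
    print(name,
          "mean glance shifts:", round(mean(m["glance_shifts"] for m in members), 2),
          "mean time on object:", round(mean(m["time_on_object"] for m in members), 2))
```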
Title On iterative compressed sensing reconstruction of sparse non-negative vectors Abstract We consider the iterative reconstruction of the Compressed Sensing (CS) problem over the reals. The iterative reconstruction allows interpretation as a channel-coding problem, and it guarantees perfect reconstruction for properly chosen measurement matrices and sufficiently sparse error vectors. In this paper, we give a summary of reconstruction algorithms for compressed sensing and examine how the iterative reconstruction performs on quasi-cyclic low-density parity check (QC-LDPC) measurement matrices. Title Deterministic capacity modeling for cellular channels: building blocks, approximate regions, and optimal transmission strategies Abstract One of the tools that arose in the context of capacity approximations is the Title High-rate codes with sublinear-time decoding Abstract Locally decodable codes are error-correcting codes that admit efficient decoding algorithms; any bit of the original message can be recovered by looking at only a small number of locations of a corrupted codeword. The tradeoff between the rate of a code and the locality/efficiency of its decoding algorithms has been well studied, and it has widely been suspected that nontrivial locality must come at the price of low rate. A particular setting of potential interest in practice is codes of constant rate. For such codes, decoding algorithms with locality O(k In this paper we construct a new family of locally decodable codes that have very efficient local decoding algorithms, and at the same time have rate approaching 1. We show that for every ε > 0 and α > 0, for infinitely many k, there exists a code C which encodes messages of length k with rate 1 - α, and is locally decodable from a constant fraction of errors using O(k These codes, which we call multiplicity codes, are based on evaluating high-degree multivariate polynomials and their derivatives. Multiplicity codes extend traditional multivariate polynomial based codes; they inherit the local-decodability of these codes, and at the same time achieve better tradeoffs and flexibility in their rate and distance. CCS Security and privacy Formal methods and theory of security Trust frameworks CCS Security and privacy Formal methods and theory of security Security requirements CCS Security and privacy Formal methods and theory of security Formal security models CCS Security and privacy Formal methods and theory of security Logic and verification Title Towards automatic verification of affine hybrid system stability Abstract Title Reasoning about systems with many processes Abstract Title PVS - design for a practical verification system Abstract Title An approach to program verification Abstract CCS Security and privacy Security services Authentication CCS Security and privacy Security services Access control Title A comprehensive privacy-aware authorization framework founded on HIPAA privacy rules Abstract Health care entities publish privacy policies that are aligned with government regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and promise to use and disclose health data according to the stated policies. However, actual practices may deliberately or unintentionally violate these policies. To ensure enforcement of such policies, and ultimately HIPAA compliance, there is a need to develop an enforcement mechanism. In this paper we extend our work on IT-enforceable policies, submitted to the International Journal of Medical Informatics.
The submitted work involved a detailed analysis of HIPAA privacy rules to extract object-related conditions needed to make a disclosure decision. In this paper we extend this work to propose machine-enforceable policies that embody HIPAA privacy disclosure rules and a health care entity's access control rules. We also propose a comprehensive access/privacy control architecture that enforces the proposed policies. The architectural model is designed to allow for a dynamic configuration of policies without reconfiguring the architecture responsible for enforcement. Both the proposed policies and the architecture allow for multiple stakeholders to adjust the privacy preferences to manage the disclosure of data by adjusting the designated parameters in their respective policies. The objective of this study is to provide a comprehensive model for privacy protection, access and logging of PHI that is HIPAA compliant. Title MeD-Lights: a usable metaphor for patient controlled access to electronic health records Abstract Electronic health records (EHRs) are poised to replace paper-based medical health records--EHRs show the promise of improving medical care by providing immediate access to a patient's records without having to worry about human-introduced delays. At the same time, mobile devices such as smartphones enable users to maintain their own medical information such as In this paper we describe and evaluate MeD-Lights, a model that leverages the metaphor of traffic light colors (red, yellow, and green) to portray sensitivity levels of records, and how they should be shared with medical personnel. We implemented a MeD-Lights application on the Android platform and performed a user study using smartphones and show that the semantics of sharing we attach to these colors are indeed intuitive to users and that users can use them effectively to manage access to their EHRs. Title State-of-the-art cloud computing security taxonomies: a classification of security challenges in the present cloud computing environment Abstract Cloud computing has taken center stage in the present business scenario due to its pay-as-you-use nature, where users need not bother about buying resources like hardware, software, infrastructure, etc. permanently. As much as the technological benefits, cloud computing also has risks involved. Looking at its financial benefits, customers who cannot afford initial investments choose the cloud by compromising on security concerns. At the same time, due to its risks, customers -- a relative majority in number -- avoid migrating towards the cloud. This paper analyzes the current security challenges in the cloud computing environment based on state-of-the-art cloud computing security taxonomies under technological and process-related aspects. Title Privacy rights management in multiparty multilevel DRM system Abstract Traditional Digital Rights Management (DRM) systems are one-level distributor systems that involve a single distributor. However, for a flexible and scalable content distribution mechanism, it is necessary to accommodate multiple distributors in the DRM model so that different strategies can be implemented in diverse geographical areas. We develop a multiparty multilevel DRM model using facility location and design a prototype DRM system that provides a transparent and flexible content distribution mechanism while maintaining the users' privacy along with accountability in the system.
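The traffic-light metaphor in the MeD-Lights abstract above lends itself to a small sketch of colour-based sharing rules. The mapping from colours to rules below is an assumption made for illustration; the paper's actual policy semantics and evaluation are not reproduced here.

```python
from enum import Enum

class Sensitivity(Enum):
    GREEN = 1   # assumed: shareable with any treating clinician
    YELLOW = 2  # assumed: shareable only with the patient's regular providers
    RED = 3     # assumed: shareable only with explicitly named individuals

def may_view(colour, requester, patient_policy):
    # Illustrative sharing rule in the spirit of the traffic-light metaphor.
    if colour is Sensitivity.GREEN:
        return True
    if colour is Sensitivity.YELLOW:
        return requester in patient_policy["regular_providers"]
    return requester in patient_policy["explicitly_authorised"]

policy = {"regular_providers": {"dr_lee"},
          "explicitly_authorised": {"dr_lee", "nurse_kim"}}
print(may_view(Sensitivity.YELLOW, "dr_lee", policy))    # True
print(may_view(Sensitivity.RED, "dr_patel", policy))     # False
```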
Title First step towards automatic correction of firewall policy faults Abstract Firewalls are critical components of network security and have been widely deployed for protecting private networks. A firewall determines whether to accept or discard a packet that passes through it based on its policy. However, most real-life firewalls have been plagued with policy faults, which either allow malicious traffic or block legitimate traffic. Due to the complexity of firewall policies, manually locating the faults of a firewall policy and further correcting them are difficult. Automatically correcting the faults of a firewall policy is an important and challenging problem. In this article, we first propose a fault model for firewall policies including five types of faults. For each type of fault, we present an automatic correction technique. Second, we propose the first systematic approach that employs these five techniques to automatically correct all or part of the misclassified packets of a faulty firewall policy. Third, we conducted extensive experiments to evaluate the effectiveness of our approach. Experimental results show that our approach is effective in correcting a faulty firewall policy with three of these types of faults. Title Relating declarative semantics and usability in access control Abstract Usability is widely recognized as a problem in the context of the administration of access control systems. We seek to relate the notion of declarative semantics, a recurring theme in research in access control, with usability. We adopt the concrete context of POSIX ACLs and the traditional interface for it that comprises two utilities, getfacl and setfacl, whose natural semantics is operational. We have designed and implemented an alternate interface that we call askfacl whose natural semantics is declarative. We discuss our design of askfacl. We then discuss a human-subject usability study that we have designed and conducted that compares the two interfaces. Our results measurably demonstrate the goodness of declarative semantics in access control. Title A framework integrating attribute-based policies into role-based access control Abstract Integrated role-based access control (RBAC) and attribute-based access control (ABAC) is emerging as a promising paradigm. This paper proposes a framework that uses attribute-based policies to create a more traditional RBAC model. RBAC has been widely used, but has weaknesses: it is labor-intensive and time-consuming to build a model instance, and a pure RBAC system lacks flexibility to efficiently adapt to changing users, objects, and security policies. In particular, it is impractical to manually make (and maintain) user-to-role assignments and role-to-permission assignments in industrial contexts characterized by a large number of users and/or security objects. ABAC has features complementary to RBAC, and merging RBAC and ABAC has become an important research topic. This paper proposes a new approach to integrating ABAC with RBAC, by modeling RBAC in two levels. The aboveground level is a standard RBAC model extended with "environment". This level retains the simplicity of RBAC, supporting RBAC model verification/review. The "underground" level is used to represent security knowledge in terms of attribute-based policies, which automatically create the simple RBAC model in the aboveground level. These attribute-based policies bring to RBAC the advantages of ABAC: they are easy to build and easy to adapt to changes.
Using this framework, we tackle the problem of permission assignment for large scale applications. This model is motivated by the characteristics and requirements of industrial control systems, and reflects in part certain approaches and practices common in the industry. Title A cloud-based RDF policy engine for assured information sharing Abstract In this paper, we describe a general-purpose, scalable RDF policy engine. The innovations in our work include seamless support for a diverse set of security policies enforced by a highly available and scalable policy engine designed using a cloud-based platform. Our main goal is to demonstrate how coalition agencies can share information stored in multiple formats, through the enforcement of appropriate policies. Title Generative models for access control policies: applications to role mining over logs with attribution Abstract We consider a fundamentally new approach to role and policy mining: finding RBAC models which reflect the observed We have evaluated our approach on a large number of real life data sets, and our algorithms produce good role decompositions as measured by metrics such as Title Algorithms for mining meaningful roles Abstract Role-based access control (RBAC) offers significant advantages over lower-level access control policy representations, such as access control lists (ACLs). However, the effort required for a large organization to migrate from ACLs to RBAC can be a significant obstacle to adoption of RBAC. Role mining algorithms partially automate the construction of an RBAC policy from an ACL policy and possibly other information, such as user attributes. These algorithms can significantly reduce the cost of migration to RBAC. This paper proposes new algorithms for role mining. The algorithms can easily be used to optimize a variety of policy quality metrics, including metrics based on policy size, metrics based on interpretability of the roles with respect to user attribute data, and compound metrics that consider size and interpretability. The algorithms all begin with a phase that constructs a set of candidate roles. We consider two strategies for the second phase: start with an empty policy and repeatedly add candidate roles, or start with the entire set of candidate roles and repeatedly remove roles. In experiments with publicly available access control policies, we find that the elimination approach produces better results, and that, for a policy quality metric that reflects size and interpretability, our elimination algorithm achieves significantly better results than previous work. CCS Security and privacy Security services Pseudonymity, anonymity and untraceability CCS Security and privacy Security services Privacy-preserving protocols CCS Security and privacy Security services Digital rights management CCS Security and privacy Security services Authorization Title A multi-layer tree model for enterprise vulnerability management Abstract Conducting enterprise-wide vulnerability assessment (VA) on a regular basis plays an important role in assessing an enterprise's information system security status. However, an enterprise network is usually very complex, divided into different types of zones, and consisting of hundreds of hosts in the networks. The complexity of IT systems makes VA an extremely time-consuming task for security professionals. They are seeking for an automated tool that helps monitor and manage the overall vulnerability of an enterprise. 
This paper presents a novel methodology that provides a dashboard solution for managing enterprise-level vulnerability. In our methodology, we develop a multi-layer tree-based model to describe enterprise vulnerability topology. Then we apply a client/server structure to gather vulnerability information from enterprise resources automatically. Finally, a set of well-defined metric formulas is applied to produce a normalized vulnerability score for the whole enterprise. As a prototype, we developed an implementation of our methodology, EVMAT, an Enterprise Vulnerability Management and Assessment Tool, to test our method. Experiments on a small e-commerce company and a small IT company demonstrate the great potential of our tool for enterprise-level security. CCS Security and privacy Intrusion/anomaly detection and malware mitigation Malware and its mitigation CCS Security and privacy Intrusion/anomaly detection and malware mitigation Intrusion detection systems CCS Security and privacy Intrusion/anomaly detection and malware mitigation Social engineering attacks CCS Security and privacy Security in hardware Tamper-proof and tamper-resistant designs CCS Security and privacy Security in hardware Embedded systems security CCS Security and privacy Security in hardware Hardware security implementation CCS Security and privacy Security in hardware Hardware attacks and countermeasures CCS Security and privacy Security in hardware Hardware reverse engineering CCS Security and privacy Systems security Operating systems security CCS Security and privacy Systems security Browser security CCS Security and privacy Systems security Distributed systems security CCS Security and privacy Systems security Information flow control Title Securing the e-health cloud Abstract Modern information technology is increasingly used in healthcare with the goal of improving and enhancing medical services and reducing costs. In this context, the outsourcing of computation and storage resources to general IT providers (cloud computing) has become very appealing. E-health clouds offer new possibilities, such as easy and ubiquitous access to medical data, and opportunities for new business models. However, they also bear new risks and raise challenges with respect to security and privacy aspects. In this paper, we point out several shortcomings of current e-health solutions and standards; in particular, they do not address client platform security, which is a crucial aspect for the overall security of e-health systems. To fill this gap, we present a security architecture for establishing privacy domains in e-health infrastructures. Our solution provides client platform security and appropriately combines this with network security concepts. Moreover, we discuss further open problems and research challenges on security, privacy and usability of e-health cloud systems. Title Addressing covert termination and timing channels in concurrent information flow systems Abstract When termination of a program is observable by an adversary, confidential information may be leaked by terminating accordingly. While this termination covert channel has limited bandwidth for sequential programs, it is a more dangerous source of information leakage in concurrent settings. We address concurrent termination and timing channels by presenting a dynamic information-flow control system that mitigates and eliminates these channels while allowing termination and timing to depend on secret values.
Intuitively, we leverage concurrency by placing such potentially sensitive actions in separate threads. While termination and timing of these threads may expose secret values, our system requires any thread observing these properties to raise its information-flow label accordingly, preventing leaks to lower-labeled contexts. We implement this approach in a Haskell library and demonstrate its applicability by building a web server that uses information-flow control to restrict untrusted web applications. Title Keeping information safe from social networking apps Abstract The ability of third-party applications to aggregate and re-purpose personal data is a fundamental privacy weakness in today's social networking platforms. Prior work has proposed sandboxing in a hosted cloud infrastructure to prevent leakage of user information [22]. In this paper, we extend simple sandboxing to allow sharing of information among friends in a social network, and to help application developers securely aggregate user data according to differential privacy properties. Enabling these two key features requires preventing, among other subtleties, a new "Kevin Bacon" attack aimed at aggregating private data through a social network graph. We describe the significant architectural and security implications for the application framework in the Web (JavaScript) application, backend cloud, and user data handling. Title Towards a policy enforcement infrastructure for distributed usage control Abstract Distributed usage control is concerned with how data may or may not be used after initial access to it has been granted and is therefore particularly important in distributed system environments. We present an application- and application-protocol-independent infrastructure that allows for the enforcement of usage control policies in a distributed environment. We instantiate the infrastructure for transferring files using FTP and for a scenario where smart meters are connected to a Facebook application. Title A model of information flow control to determine whether malfunctions cause the privacy invasion Abstract Privacy is difficult to assure in complex systems that collect, process, and store data about individuals. The problem is particularly acute when data arise from sensing physical phenomena as individuals are unlikely to realise that actions such as walking past a building generate privacy-sensitive data. Information Flow Control (IFC) is a mature technique for managing security and privacy concerns in large distributed systems. This paper describes (i) how the meta-data required by IFC, in the form of tags, can reflect the physical properties of sensors; and (ii) how the formal expression of the IFC this allows can be used to, statically, determine the proportion of the system that handles private data and how this changes in the face of software or human malfunctions. Title High false positive detection of security vulnerabilities: a case study Abstract Static code analysis is an emerging technique for secure software development that analyzes large software code bases without execution to reveal potential vulnerabilities present in the code. These vulnerabilities include but are not limited to SQL injections, buffer overflows, cross site scripting, improper security settings, and information leakage. Software developers can spend many man-hours to track and fix the flagged vulnerabilities. Surveys show that a high percentage of discovered vulnerabilities are actually false positives. 
This paper presents a case study that found that context information regarding libraries could account for many of the false positives. We suggest future research incorporate context information into static analysis tools for security. Title A sense of others: behavioral attestation of UNIX processes on remote platforms Abstract Remote attestation is a technique in Trusted Computing to verify the trustworthiness of a client platform. The most well-known method of verifying the client system to the remote end is the Integrity Measurement Architecture (IMA). IMA relies on the hashes of applications to prove the trusted state of the target system to the remote challenger. This hash-based approach leads to several problems including highly rigid target domains. To overcome these problems several dynamic attestation techniques have been proposed. These techniques rely on the runtime behavior of an application or data structures and sequence of system calls. In this paper we propose a new attestation technique that relies on the seminal work done in Sequence Time Delay Embedding (STIDE). We present our target architecture in which the client end is leveraged with STIDE and the short sequences of system call patterns associated with a process are measured and reported to the challenger. Furthermore, we investigate how this technique can shorten the reported data as compared to other system call-based attestation techniques. The primary advantage of this technique is to detect zero-day malware at the client platform. There are two most important metrics for the successful implementation of dynamic behavior attestation. One is the time required for processing on the target system and second is the network overhead. In our proposed model we concentrate on maximizing the efficiency of these metrics. Title Protecting health information on mobile devices Abstract Mobile applications running on devices such as smart phones and tablets will be increasingly used to provide convenient access to health information to health professionals and patients. Also, patients will use these devices to transmit health information captured by sensing devices in settings like the home to remote repositories. As mobile devices become targets of security threats, we must address the problem of protecting sensitive health information on them. We explore key threats to data on mobile devices and develop a security framework that can help protect it against such threats. We implemented this framework in the Android operating system and augmented it with user consent detection to enhance user awareness and control over the use of health information. Our framework can be used to enforce security policies that govern access to sensitive health data on mobile devices. Physicians and patients using our framework can install third-party healthcare applications with the guarantee that sensitive medical information will not be sent without their knowledge even when these applications are compromised. We describe the key mechanisms implemented by our framework and how they can enforce a security policy. We also discuss our early experience with the framework. Title Information flow analysis for javascript Abstract Modern Web 2.0 pages combine scripts from several sources into a single client-side JavaScript program with almost no isolation. In order to prevent attacks from an untrusted third-party script or cross-site scripting, tracking provenance of data is imperative. However, no browser offers this security mechanism. 
This work presents the first information flow control mechanism for full JavaScript. We track information flow dynamically as much as possible but rely on intra-procedural static analysis to capture implicit flow. Our analysis handles even the dreaded eval function soundly and incorporates flow based on JavaScript's prototype inheritance. We implemented our analysis in a production JavaScript engine and report both qualitative as well as quantitative evaluation results. Title Towards ensuring client-side computational integrity Abstract Privacy is considered one of the key challenges when moving services to the Cloud. Solutions like access control are brittle, while fully homomorphic encryption, which is hailed as the silver bullet for this problem, is far from practical. But would fully homomorphic encryption really be such an effective solution to the privacy problem? And can we already deploy architectures with similar security properties? We propose one such architecture that provides privacy and integrity and leverages the Cloud for availability while only using cryptographic building blocks available today. CCS Security and privacy Systems security Denial-of-service attacks CCS Security and privacy Systems security Firewalls CCS Security and privacy Systems security Vulnerability management CCS Security and privacy Systems security File system security CCS Security and privacy Network security Security protocols CCS Security and privacy Network security Web protocol security CCS Security and privacy Network security Mobile and wireless security CCS Security and privacy Network security Denial-of-service attacks CCS Security and privacy Network security Firewalls CCS Security and privacy Database and storage security Data anonymization and sanitization CCS Security and privacy Database and storage security Management and querying of encrypted data CCS Security and privacy Database and storage security Information accountability and usage control CCS Security and privacy Database and storage security Database activity monitoring CCS Security and privacy Software and application security Software security engineering Title Corrective Enforcement: A New Paradigm of Security Policy Enforcement by Monitors Abstract Runtime monitoring is an increasingly popular method to ensure the safe execution of untrusted code. Monitors observe and transform the execution of this code, responding when needed to correct or prevent a violation of a user-defined security policy. Prior research has shown that the set of properties monitors can enforce correlates with the latitude they are given to transform and alter the target execution. But for enforcement to be meaningful, this capacity must be constrained; otherwise the monitor can enforce any property, but not necessarily in a manner that is useful or desirable. However, such constraints have not been significantly addressed in prior work. In this article, we develop a new paradigm of security policy enforcement in which the behavior of the enforcement mechanism is restricted to ensure that valid aspects present in the execution are preserved notwithstanding any transformation it may perform. These restrictions capture the desired behavior of valid executions of the program, and are stated by way of a preorder over sequences. The resulting model is closer than previous ones to what would be expected of a real-life monitor, from which we demand a minimal footprint on both valid and invalid executions.
We illustrate this framework with examples of real-life security properties. Since several different enforcement alternatives of the same property are made possible by the flexibility of this type of enforcement, our study also provides metrics that allow the user to compare monitors objectively and choose the best enforcement paradigm for a given situation. Title Security-policy monitoring and enforcement with JavaMOP Abstract Software security attacks represent an ever-growing problem. One way to make software more secure is to use Inlined Reference Monitors (IRMs), which allow security specifications to be inlined inside a target program to ensure its compliance with the desired security specifications. The IRM approach has been developed primarily by the security community. Runtime Verification (RV), on the other hand, is a software engineering approach, which is intended to formally encode system specifications within a target program such that those specifications can be later enforced during the execution of the program. Until now, the IRM and RV approaches have lived separate lives; in particular RV techniques have not been applied to the security domain, being used instead to aid program correctness and testing. This paper discusses the usage of a formalism-generic RV system, JavaMOP, as a means to specify IRMs, leveraging the careful engineering of the JavaMOP system for ensuring secure operation of software in an efficient manner. Title Towards a taint mode for cloud computing web applications Abstract Cloud computing is generally understood as the distribution of data and computations over the Internet. Over the past years, there has been a steep increase in web sites using this technology. Unfortunately, those web sites are not exempted from injection flaws and cross-site scripting, two of the most common security risks in web applications. Taint analysis is an automatic approach to detect vulnerabilities. Cloud computing platforms possess several features that, while facilitating the development of web applications, make it difficult to apply off-the-shelf taint analysis techniques. More specifically, several of the existing taint analysis techniques do not deal with persistent storage (e.g. object datastores), opaque objects (objects whose implementation cannot be accessed and thus tracking tainted data becomes a challenge), or a rich set of security policies (e.g. forcing a specific order of sanitizers to be applied). We propose a taint analysis for cloud computing web applications that considers these aspects. Rather than modifying interpreters or compilers, we provide taint analysis via a Python library for the cloud computing platform Google App Engine (GAE). To evaluate the use of our library, we harden an existing GAE web application against cross-site scripting attacks. Title Knowledge-oriented secure multiparty computation Abstract Protocols for We propose here a way to apply Title Hash-flow taint analysis of higher-order programs Abstract As web applications have grown in popularity, so have attacks on such applications. Cross-site scripting and injection attacks have become particularly problematic. Both vulnerabilities stem, at their core, from improper sanitization of user input. We propose static taint analysis, which can verify the absence of unsanitized input errors at compile-time.
Unfortunately, precise static analysis of modern scripting languages like Python is challenging: higher-orderness and complex control-flow collide with opaque, dynamic data structures like hash maps and objects. The interdependence of data-flow and control-flow makes it hard to attain both soundness and precision. In this work, we apply abstract interpretation to sound and precise taint-style static analysis of scripting languages. We first define λ We have prototyped the analytical framework for Python, and conducted preliminary experiments with web applications. A low rate of false alarms demonstrates the promise of this approach. Title Security correctness for secure nested transactions: position paper Abstract This article considers the synthesis of two long-standing lines of research in computer security: security correctness for multilevel databases, and language-based security. The motivation is an approach to supporting end-to-end security for a wide class of enterprise applications, those of concurrent transactional applications. The approach extends nested transactions with Title A generic approach for security policies composition: position paper Abstract When modelling access control in distributed systems, the problem of security policies composition arises. Much work has been done on different ways of combining policies, and on using different logics to do this. In this paper, we propose a more general approach based on a 4-valued logic that abstracts from the specific setting and groups together many of the existing ways of combining policies. Moreover, we propose going one step further, by twisting the 4-valued logic and obtaining a more traditional approach that might therefore be more appropriate for analysis. Title Static flow-sensitive & context-sensitive information-flow analysis for software product lines: position paper Abstract A software product line encodes a potentially large variety of software products as variants of some common code base, e.g., through the use of #ifdef statements or other forms of conditional compilation. Traditional information-flow analyses cannot cope with such constructs. Hence, to check for possibly insecure information flow in a product line, one currently has to analyze each resulting product separately, of which there may be thousands, making this task intractable. We report on ongoing work that will instead enable users to check the security of information flows in entire software product lines in one single pass, without having to generate individual products from the product line. Executing the analysis on the product line promises to be orders of magnitude faster than analyzing products individually. We discuss the design of our information-flow analysis and our ongoing implementation using the IFDS/IDE framework by Reps, Horwitz and Sagiv. Title Universal language for ERP's Abstract ERPs (Enterprise Resource Planning systems) have been playing key roles in aiding management processes in companies. Over the years, these management tools have evolved constantly, keeping pace with the growing challenges that enterprises face. Globalization, together with the recent economic crisis, has created new competitive needs among companies. One of the most commonly experienced is the externalization of information. This article illustrates a solution with which ERPs can meet this challenge, presenting an abstraction in which ERPs are treated as agents that communicate with each other.
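The 4-valued treatment of policy composition mentioned in the position paper above can be pictured with a short sketch. The combination operator below is a generic Belnap-style example chosen for illustration, not necessarily the operator the authors define.

```python
from enum import Enum

class Decision(Enum):
    GRANT = "grant"
    DENY = "deny"
    UNDEFINED = "undefined"   # the policy does not apply to the request
    CONFLICT = "conflict"     # the combined policies disagree

def combine(a, b):
    # Illustrative 4-valued combination of two policy decisions.
    if a is Decision.UNDEFINED:
        return b
    if b is Decision.UNDEFINED:
        return a
    return a if a is b else Decision.CONFLICT

print(combine(Decision.GRANT, Decision.UNDEFINED).value)  # grant
print(combine(Decision.GRANT, Decision.DENY).value)       # conflict
```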
Title Applying random projection to the classification of malicious applications using data mining algorithms Abstract This research is part of a continuing effort to show the viability of using random projection as a feature extraction and reduction technique in the classification of malware to produce more accurate classifiers. In this paper, we use a vector space model with n-gram analysis to produce weighted feature vectors from binary executables, which we then reduce to a smaller feature set using the random projection method proposed by Achlioptas, and the feature selection method of mutual information, to produce two separate data sets. We then apply several popular machine learning algorithms including the J48 decision tree, naïve Bayes, support vector machines, and an instance-based learner to the data sets to produce classifiers for the detection of malicious executables. We evaluate the performance of the different classifiers and discover that using a data set reduced by random projection can improve the performance of support vector machine and instance-based learner classifiers. CCS Security and privacy Software and application security Web application security CCS Security and privacy Software and application security Social network security and privacy CCS Security and privacy Software and application security Domain-specific security and privacy architectures Title DRAP: a Robust Authentication protocol to ensure survivability of computational RFID networks Abstract The Wireless Identification and Sensing Platform (WISP) from Intel Research Seattle is an instance of Computational RFID (CRFID). Since WISP tags contain sensor data along with their Title Untraceable, anonymous and fair micropayment scheme Abstract The development of new applications of electronic commerce (e-commerce) that require the payment of small amounts of money to purchase services or goods opens new challenges in the security and privacy fields. These payments are called micropayments, and they have to provide a tradeoff between efficiency and security requirements for paying low-value items. In this paper we present a new efficient and secure micropayment scheme which fulfils the security properties that guarantee no financial risk for merchants and the privacy of the customers. In addition, the proposed system defines a fair exchange between the micropayment and the desired good or service. In this fair exchange, the anonymity and untraceability of the customers are assured. Finally, customers can request a refund if they are no longer interested in the services offered by merchants. Title On the security and practicality of a buyer seller watermarking protocol for DRM Abstract A buyer seller watermarking (BSW) protocol allows a seller of digital content to prove to a third party that a buyer illegally distributed copies of content when these copies are found. It also protects an honest buyer from being falsely accused of such an act by the seller. We examine the security and practicality of a recent BSW protocol for Digital Rights Management (BSW-DRM) proposed in SIN 2009. We show that the protocol contains weaknesses, which may result in successful replay, modification and content piracy. Furthermore, the heavy reliance on a fully trusted Certificate Authority raises security concerns, and the protocol is also less practical to apply in current digital content distribution systems. We further suggest possible improvements based on the many protocols proposed prior to this protocol.
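A minimal sketch of the random-projection pipeline described in the malware-classification abstract above, using scikit-learn's Achlioptas-style sparse random projection. The feature matrix and labels here are randomly generated stand-ins for the weighted n-gram vectors the paper extracts from binary executables.

```python
import numpy as np
from sklearn.random_projection import SparseRandomProjection
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for weighted n-gram feature vectors from binary executables.
X = rng.random((200, 5000))
y = rng.integers(0, 2, size=200)   # 0 = benign, 1 = malicious (synthetic labels)

# Achlioptas-style sparse random projection to a smaller feature space.
projector = SparseRandomProjection(n_components=300, dense_output=True,
                                   random_state=0)
X_reduced = projector.fit_transform(X)

# Train one of the classifiers mentioned in the abstract on the reduced data.
clf = SVC().fit(X_reduced[:150], y[:150])
print("held-out accuracy:", clf.score(X_reduced[150:], y[150:]))
```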
Title Echo hiding based stereo audio watermarking against pitch-scaling attacks Abstract In audio watermarking, the robustness against pitch-scaling attacks is one of the most challenging problems. In this paper, we propose an algorithm, based on traditional time-spread (TS) echo hiding based audio watermarking, to solve this problem. In TS echo hiding based watermarking, a pitch-scaling attack shifts the location of the pseudonoise (PN) sequence that appears in the cepstrum domain. Thus, the position of the peak that occurs after correlating with the PN sequence changes by an unknown amount, which causes the error. In the proposed scheme, we replace the PN sequence with a unit-sample sequence and modify the decoding algorithm so that it does not depend on a particular point in the cepstrum domain for watermark extraction. Moreover, the proposed algorithm is applied to stereo audio signals to further improve the robustness. Experimental results illustrate the effectiveness of the proposed algorithm against pitch-scaling attacks compared to existing methods. In addition, the proposed algorithm also gives better robustness against other conventional signal processing attacks. Title Understanding fraudulent activities in online ad exchanges Abstract Online advertisements (ads) provide a powerful mechanism for advertisers to effectively target Web users. Ads can be customized based on a user's browsing behavior, geographic location, and personal interests. There is currently a multi-billion dollar market for online advertising, which generates the primary revenue for some of the most popular websites on the Internet. In order to meet the immense market demand, and to manage the complex relationships between advertisers and publishers (i.e., the websites hosting the ads), marketplaces known as "ad exchanges" are employed. These exchanges allow publishers (sellers of ad space) and advertisers (buyers of this ad space) to dynamically broker traffic through ad networks to efficiently maximize profits for all parties. Unfortunately, the complexities of these systems invite a considerable amount of abuse from cybercriminals, who profit at the expense of the advertisers. In this paper, we present a detailed view of how one of the largest ad exchanges operates and the associated security issues from the vantage point of a member ad network. More specifically, we analyzed a dataset containing transactions for ingress and egress ad traffic from this ad network. In addition, we examined information collected from a command-and-control server used to operate a botnet that is leveraged to perpetrate ad fraud against the same ad exchange. Title Practical PIR for electronic commerce Abstract We extend Goldberg's multi-server information-theoretic private information retrieval (PIR) with a suite of protocols for privacy-preserving e-commerce. Our first protocol adds support for single-payee tiered pricing, wherein users purchase database records without revealing the indices or prices of those records. Tiered pricing lets the seller set prices based on each user's status within the system; e.g., non-members may pay full price while members may receive a discounted rate. We then extend tiered pricing to support group-based access control lists with record-level granularity; this allows the servers to set access rights based on users' price tiers.
Next, we show how to do some basic bookkeeping to implement a novel top-K replication strategy that enables the servers to construct bestsellers lists, which facilitate faster retrieval for these most popular records. Finally, we build on our bookkeeping functionality to support multiple payees, thus enabling several sellers to offer their digital goods through a common database while enabling the database servers to determine to what portion of revenues each seller is entitled. Our protocols maintain user anonymity in addition to query privacy; that is, queries do not leak information about the index or price of the record a user purchases, the price tier according to which the user pays, the user's remaining balance, or even whether the user has ever queried the database before. No other priced PIR or oblivious transfer protocol supports tiered pricing, access control lists, multiple payees, or top-K replication, whereas ours supports all of these features while preserving PIR's sublinear communication complexity. We have implemented our protocols as an add-on to Percy++, an open source implementation of Goldberg's PIR scheme. Measurements indicate that our protocols are practical for deployment in real-world e-commerce applications. Title Semi-automated communication protocol security verification for watermarking - pros and cons illustrated on a complex application scenario Abstract The primary goal in this paper is to adapt and extend a recent concept and prototypical framework for (semi-)automated security verification of watermarking-based communication protocols based on the CASPER protocol modeling language and the FRD model checker. Therefore our paper extends the scope of watermarking research beyond signal processing and information theory investigations to include also protocol verification considerations as known e.g. from the field of cryptographic research. To be able to establish a clear picture of the potential prospects and the current restrictions of such a verification framework for watermarking-based communication protocols, we conceptualize, model, generate and (partially) verify an exemplary protocol for a complex watermarking-based application scenario that combines a multi-level data access structure and the assurance of the security aspects of confidentiality, authenticity and integrity. Our results show that, while the security aspects of communication confidentiality and entity-authenticity can actually be verified with the introduced approach, other security aspects which might be similarly verified are still lacking corresponding support in protocol modeling languages like CASPER. Title A study of feature subset evaluators and feature subset searching methods for phishing classification Abstract Phishing is a semantic attack that aims to take advantage of the naivety of users of electronic services (e.g. e-banking). A number of solutions have been proposed to minimize the impact of phishing attacks. The most accurate email phishing classifiers, that are publicly known, use machine learning techniques. Previous work in phishing email classification via machine learning have primarily focused on enhancing the classification accuracy by studying the addition of novel features, ensembles, or classification algorithms. This study follows a different path by taking advantage of previously proposed features. 
The primary focus of this paper is to enhance the classification accuracy of phishing email classifiers by finding an effective feature subset out of a number of previously proposed features, by evaluating various feature selection methods. The selected feature subset in this study resulted in a classification model with an Title Lexical URL analysis for discriminating phishing and legitimate websites Abstract A study that aims to evaluate the practical effectiveness of website classification by lexically analyzing URL tokens, in addition to a novel tokenization mechanism to increase prediction accuracy. The study analyzes over 70,000 legitimate and phishing URLs collected over a 6-month period from PhishTank Title Enhancing scalability in anomaly-based email spam filtering Abstract Spam has become an important problem for computer security because it is a channel for the spreading of threats such as computer viruses, worms and phishing. Currently, more than 85% of received emails are spam. Historical approaches to combat these messages, including simple techniques such as sender blacklisting or the use of email signatures, are no longer completely reliable. Many solutions utilise machine-learning approaches trained using statistical representations of the terms that usually appear in the emails. However, these methods require a time-consuming training step with labelled data. Dealing with the situation where the availability of labelled training instances is limited slows down the progress of filtering systems and offers advantages to spammers. In a previous work, we presented the first spam filtering method based on anomaly detection that reduces the necessity of labelling spam messages and only employs the representation of legitimate emails. We showed that this method achieved high accuracy rates detecting spam while maintaining a low false positive rate and reducing the effort of labelling spam. In this paper, we enhance that system by applying a data reduction algorithm to the labelled dataset, finding similarities among legitimate emails and grouping them into consistent clusters that reduce the number of comparisons needed. We show that this improvement drastically reduces the processing time while keeping detection and false positive rates stable. CCS Security and privacy Software and application security Software reverse engineering Title Scope extension of an existing product line Abstract At the beginning, creating a product line needs a well-defined and narrow scope to meet short time-to-market demands. When established, there is a tendency to broaden the scope and to cover more domains and products. We have undergone a scope extension of our medical diagnostic platform that was implemented while the platform and (existing) products were evolving. In this paper, we list best practices for the migration process and how to come to a sustainable solution without cannibalizing the existing platform and products. In particular, we describe our way of identifying beneficial sub-domains using C/V analysis and give an example scenario with alignments in order to increase commonality. We explain the maturity considerations for deciding on reuse of existing implementations and a carve-out strategy to split existing assets into common modules and product-line specific extensions. Furthermore, we describe our best practices for making the scope extension sustainable in the long term, using various types of governance means.
We briefly complement these experiences with further insights gained during execution of this endeavor. Title Identifying improvement potential in evolving product line infrastructures: 3 case studies Abstract Successful software products evolve continuously to meet the changing stakeholder requirements. For software product lines, an additional challenge is that variabilities, characteristics that vary among products, change as well over time. That challenge must be carefully tackled during the evolution of the product line infrastructure. This is a significant problem for many software development organizations, as practical guidelines on how to evolve core assets, and especially source code, are missing. This paper investigates how to achieve "good enough" variability management during the evolution of variation in software design and implementation assets. As a first contribution, we present a customizable goal-based approach which helps to identify improvement potential in existing core assets to ease evolution. To find concrete ways to improve the product line infrastructure, we list the typical symptoms of variability "code smells" and show how to refine them to root causes, questions, and finally to metrics that can be extracted from large code bases. As a second main contribution, we show how this method was applied to evaluate the reuse quality of three industrial embedded systems. These systems are implemented in C or C++ and use Conditional Compilation as the main variability mechanism. We also introduce the analysis and refactoring tool set that was used in the case studies and discuss the lessons learnt. Title History-sensitive heuristics for recovery of features in code of evolving program families Abstract A program family might degenerate due to unplanned changes in its implementation, thus hindering the maintenance of family members. This degeneration is often induced by feature code that is changed individually in each member without considering other family members. Hence, as a program family evolves over time, it might no longer be possible to distinguish between common and variable features. One of the imminent activities to address this problem is the history-sensitive recovery of program family's features in the code. This recovery process encompasses the analysis of the evolution history of each family member in order to classify the implementation elements according to their variability nature. In this context, this paper proposes history-sensitive heuristics for the recovery of features in code of degenerate program families. Once the analysis of the family history is carried out, the feature elements are structured as Java project packages; they are intended to separate those elements in terms of their variability degree. The proposed heuristics are supported by a prototype tool called RecFeat. We evaluated the accuracy of the heuristics in the context of 33 versions of 2 industry program families. They presented encouraging results regarding recall measures that ranged from 85% to 100%; whereas the precision measures ranged from 71% to 99%. Title Efficient synthesis of feature models Abstract Variability modeling, and in particular feature modeling, is a central element of model-driven software product line architectures. Such architectures often emerge from legacy code, but, unfortunately creating feature models from large, legacy systems is a long and arduous task. We address the problem of automatic synthesis of feature models from propositional constraints. 
We show that this problem is NP-hard. We design efficient techniques for synthesis of models from respectively CNF and DNF formulas, showing a 10- to 1000-fold performance improvement over known techniques for realistic benchmarks. Our algorithms are the first known techniques that are efficient enough to be applied to dependencies extracted from real systems, opening new possibilities of creating reverse engineering and model management tools for variability models. We discuss several such scenarios in the paper. Title Code-based variability model extraction for software product line improvement Abstract Successful Software Product Lines (SPLs) evolve over time. However, one practical problem is that during SPL evolution the core assets, especially the code, tend to become complicated and difficult to understand, use, and maintain. Typically, more and more problems arise over time with implicit or already lost adaptation knowledge about the interdependencies of the different system variants and the supported variability. In this paper, we present a model-based SPL improvement process that analyzes existing large-scale SPL reuse infrastructure to identify improvement potential with respective metrics. Since Conditional Compilation (CC) is one of the most widely used mechanisms to implement variability, we parse variability-related facts from preprocessor code. Then we automatically extract an implementation variability model, including product configuration and variation points that are structured in a hierarchical variability tree. The extraction process is presented with concrete measurement results from an industrial case study. Title Comparing and combining genetic and clustering algorithms for software component identification from object-oriented code Abstract Software component identification is one of the primary challenges in component based software engineering. Typically, the identification is done by analyzing existing software artifacts. When considering object-oriented systems, many approaches have been proposed to deal with this issue by identifying a component as a strongly related set of classes. We propose in this paper a comparison between the formulations and the results of two algorithms for the identification of software components: clustering and genetic. Our goal is to show that each of them has advantages and disadvantages. Thus, the solution we adopted is to combine them to enhance the results. Title An architectural approach to ensure globally consistent dynamic reconfiguration of component-based systems Abstract One of the key issues that should be considered when addressing reliable evolution is to place a software system in a consistent status before and after change. This issue becomes more critical at runtime because it may lead to the failure on running mission-critical systems. In order to place the affected elements in a safe state before dynamic changes take place, the notion of Title NASA's advanced multimission operations system: a case study in software architecture evolution Abstract Virtually all software systems of significant size and longevity eventually undergo changes to their basic architectural structure. Such changes may be prompted by new feature requests, new quality attribute requirements, changing technology, or other reasons. Whatever the cause, software architecture evolution is commonplace in real-world software projects. 
However, research in this area has suffered from problems of validation; previous work has tended to make heavy use of toy examples and hypothetical scenarios and has not been well supported by real-world examples. To help address this problem, this paper presents a case study of an ongoing effort at the Jet Propulsion Laboratory to rearchitect the Advanced Multimission Operations System used to operate NASA's deep-space and astrophysics missions. Based on examination of project documents and interviews with project personnel, I describe the goals and approach of this evolution effort, then demonstrate how approaches and formal methods from previous research in architecture evolution may be applied to this evolution while using languages and tools already in place at the Jet Propulsion Laboratory. Title Scripting a refactoring with Rascal and Eclipse Abstract To facilitate experimentation with creating new, complex refactorings, we want to reuse existing transformation and analysis code as orchestrated parts of a larger refactoring: i.e., to script refactorings. The language we use to perform this scripting must be able to deal with the diversity of languages, tools, analyses, and transformations that arise in practice. To illustrate one solution to this problem, in this paper we describe, in detail, a specific refactoring script for switching from the Visitor design pattern to the Interpreter design pattern. This script, written in the meta-programming language Rascal, and targeting an interpreter written in Java, extracts facts from the interpreter code using the Eclipse JDT, performs the needed analysis in Rascal, and then transforms the interpreter code using a combination of Rascal code and existing JDT refactorings. Using this script we illustrate how a new, real and complex refactoring can be scripted in a few hundred lines of code and within a short timeframe. We believe the key to successfully building such refactorings is the ability to pair existing tools, focused on specific languages, with general-purpose meta-programming languages. Title Identifying extract-method refactoring candidates automatically Abstract Refactoring becomes an essential activity in software development process especially for large and long life projects. CCS Security and privacy Human and societal aspects of security and privacy Economics of security and privacy CCS Security and privacy Human and societal aspects of security and privacy Social aspects of security and privacy CCS Security and privacy Human and societal aspects of security and privacy Privacy protections CCS Security and privacy Human and societal aspects of security and privacy Usability in security and privacy CCS Human-centered computing Human computer interaction (HCI) HCI design and evaluation methods CCS Human-centered computing Human computer interaction (HCI) Interaction paradigms CCS Human-centered computing Human computer interaction (HCI) Interaction devices CCS Human-centered computing Human computer interaction (HCI) HCI theory, concepts and models Title W5: a meta-model for pen-and-paper interaction Abstract Pen-and-Paper Interaction ( NA Title Model-based training: an approach supporting operability of critical interactive systems Abstract Operation of safety critical systems requires qualified operators that have detailed knowledge about the system they are using and how it should be used. Instructional Design and Technology intends to analyze, design, implement, evaluate, maintain and manage training programs. 
Among the many methods and processes that are currently in use, the first one to be widely exploited was Instructional Systems Development (ISD), which has been further developed in many ramifications and is part of the Systematic Approach to Training (SAT) instructional design family. One of the key features of these processes (at least when they are refined) is the importance of Instructional Task Analysis, particularly the decomposition of a job into its tasks and sub-tasks in order to decide what knowledge and skills must be acquired by the trainee. This paper proposes to leverage this systematic approach using model-based approaches currently used for interactive systems engineering in order to design such training programs and thus to improve human reliability. The paper explains how task and interactive systems modeling can be bound to job analysis to ensure that each trainee meets the performance goals required. Such training ensures proper learning at the three levels of Rasmussen's Skills-Rules-Knowledge (SRK) framework. In the case study we describe the process for building a training program for operators of satellite ground segments, which is based on and compatible with the Ground Systems and Operations ECSS standard. Then, we propose to enhance this process with a) the application of a Systematic Approach to Training and b) the use of both a System Model and an Operator Task Model. The system model is built using the ICO notation, while operators' goals and tasks are described using the HAMSTERS notation. Title MACS: combination of a formal mixed interaction model with an informal creative session Abstract In this paper, we propose a collaborative design method combining the informal power of a creative session with the formal generative power of a mixed interaction model called MACS (Model Assisted Creativity Session). By using a formal notation during creative sessions, interdisciplinary teams systematically explore combinations between the physical and digital spaces and remain focused on the design problem to address. In this paper, we introduce the MACS method principles and illustrate its application on two case studies. Title Buffer automata: a UI architecture prioritising HCI concerns for interactive devices Abstract We introduce an architectural software formalism, Title Combining aspect-oriented modeling with property-based reasoning to improve user interface adaptation Abstract User interface adaptations can be performed at runtime to dynamically reflect any change of context. Complex user interfaces and contexts can lead to the combinatorial explosion of the number of possible adaptations. Thus, dynamic adaptations come across the issue of adapting user interfaces in a reasonable time-slot with limited resources. In this paper, we propose to combine aspect-oriented modeling with property-based reasoning to tame complex and dynamic user interfaces. At runtime and in a limited time-slot, this combination enables efficient reasoning on the current context and on the available user interface components to provide a well-suited adaptation. The proposed approach has been evaluated through EnTiMid, a middleware for home automation. Title QUIMERA: a quality metamodel to improve design rationale Abstract With the increasing complexity of User Interfaces (UI) it is more and more necessary to make users understand the UI. We promote a Model-Driven approach to improve the perceived quality through an explicit and observable design rationale.
The design rationale is the logical reasons given to justify a designed artifact. The design decisions are not taken arbitrarily, but following some criteria. We propose a Quality Metamodel to justify these decisions along a Model-Driven Engineering approach. Title RIM: risk interaction model for vehicle navigation Abstract Interactive auto-driving systems are used for disabled and elderly persons. In such systems, human errors during operation or interaction could lead to serious consequences during motion. A novel human-robot interaction model, termed risk interaction model (RIM), is proposed for quantitative evaluation of the risk for complex interactive systems in terms of human safety. The risk elements for system-human interaction are defined, and quantitative relations among the elements are formalized based on experimental analysis. Extensive experiments are used to validate RIM. Title Estimation of conversational activation level during video chat using turn-taking information. Abstract In this paper, we discuss the feasibility of estimating the activation level of a conversation by using phonetic and turn-taking features. First, we recorded the voices of conversations of six three-person groups at three different activation levels. Then, we calculated the phonetic and turn-taking features, and analyzed the correlation between the features and the activity level. The analysis revealed that response latency, overlap rate, and speech rate correlate with the activation levels and they are less sensitive to individual deviation. Then, we formulated multiple regression equations, and examined the estimation accuracy using the analyzed data of the six three-person groups. The results demonstrated the feasibility to estimate activation level at approximately 18% root-mean-square error (RMSE). Title SlideDeckFinder: identifying related slide decks based on visual appearance and composition patterns Abstract This paper introduces Title Correlation with aspiration for change: a case study for restoration after natural disaster Abstract In this paper, we present a participatory design (PD) case for public good, CCS Human-centered computing Human computer interaction (HCI) Interaction techniques CCS Human-centered computing Human computer interaction (HCI) Interactive systems and tools CCS Human-centered computing Human computer interaction (HCI) Empirical studies in HCI Title Pattern-driven engineering of interactive computing systems (PEICS) Abstract Since almost one decade HCI pattern languages are one popular form of design knowledge representations which can be used to facilitate the exchange of best practices, knowledge and design experience between the interdisciplinary team members and allow the formalization of different user interface aspects. Since patterns usually describe the rational in which context they should be applied (when), why a certain pattern should be used in a specific use context (why) and how to implement the solution part (how) they are suitable to describe different user interface aspects in a constructive way. But despite intense research activities in the last years, HCI pattern languages still lack in a Besides that, evaluating the effectiveness of a pattern, i.e. when is a pattern a 'good' pattern is an important issue that has to be tackled to fully benefit from HCI patterns and to improve their applicability in future design processes. 
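As a small, hypothetical illustration of the regression-based estimation described in the conversational-activation abstract above, the sketch below fits a least-squares model from turn-taking features (response latency, overlap rate, speech rate) to an activation score and reports RMSE. All data and coefficients are synthetic; only the general approach follows the abstract.

```python
# Hypothetical sketch: multiple regression from turn-taking features to a
# conversational activation level, evaluated by RMSE. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 60  # conversation segments

# Features: response latency (s), overlap rate, speech rate (syllables/s).
latency = rng.uniform(0.1, 1.5, n)
overlap = rng.uniform(0.0, 0.4, n)
rate = rng.uniform(3.0, 7.0, n)
X = np.column_stack([np.ones(n), latency, overlap, rate])

# Invented ground-truth activation: shorter latencies and more overlapping,
# faster speech are treated as more "active" conversation.
y = 0.2 - 0.3 * latency + 1.2 * overlap + 0.08 * rate + rng.normal(0, 0.05, n)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print("coefficients:", np.round(coef, 3), "RMSE:", round(rmse, 3))
```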
Title Remembering the stars?: effect of time on preference retrieval from memory Abstract Many recommendation systems rely on explicit ratings provided by their users. Often these ratings are provided long after consuming the item, relying heavily on people's representation of the quality of the item in memory. This paper investigates a psychological process, the "positivity effect", that influences the retrieval of quality judgments from our memory by which pleasant items are being processed and recalled from memory more effectively than unpleasant items. In an offline study on the MovieLens data we used the time between release date and rating date as a proxy for the time between consumption and rating. Ratings for movies tend to increase over time, consistent with the positivity effect. A subsequent online user study used a direct measure of time between rating and consumption, by asking users to rate movies (recently aired on television) and to explicitly report how long ago they watched these movies. In contrast to the offline study we find that ratings tend to decline over time showing reduced accuracy in ratings for items experienced long ago. We discuss the impact these rating dynamics might have on recommender algorithms, especially in cases where a new user has to submit his preferences to a system. Title Inspectability and control in social recommenders Abstract Users of social recommender systems may want to inspect and control how their social relationships influence the recommendations they receive, especially since recommendations of social recommenders are based on friends rather than anonymous "nearest neighbors". We performed an online user experiment (N=267) with a Facebook music recommender system that gives users control over the recommendations, and explains how they came about. The results show that inspectability and control indeed increase users' perceived understanding of and control over the system, their rating of the recommendation quality, and their satisfaction with the system. Title Mobile posture monitoring system to prevent physical health risk of smartphone users Abstract With the widespread use of a smartphone, users tend to use their smartphone for a long period of time in unhealthy postures; bending forward the neck and watching the relatively small screen closely with concentration. If users keep such unhealthy postures for a long time, they are susceptible to musculoskeletal disorders and eye problems such as cervical disc and myopia, respectively. To prevent users from having these diseases, we propose a new methodology to monitor the posture of smartphone users with built-in sensors. The proposed mechanism estimates various values representing user postures like the tilt angle of the neck, viewing distance, and gaze condition of the user, by analyzing sensor data from a front-faced camera, 3-axis accelerometer, orientation sensor, or any combination thereof, and warns the user if estimated values are maintained within the abnormal range over the allowed time. Via the proposed mechanism, users are able to be aware of their unhealthy postures, and then try to correct their postures. Title Personality, genre and videogame play experience Abstract This study explored relationships between personality, videogame preference and gaming experiences. 
Four hundred and sixty-six participants completed an online survey in which they recalled a recent gaming experience, and provided measures of personality and their gaming experience via the Game Experience Questionnaire (GEQ). Relationships between game genre, personality and gaming experience were found. Results are interpreted with reference to possible implications for a positive impact on wellbeing of videogame play and possible means of improving the breadth of appeal of specific genres. Title Personality and player types in Fallout New Vegas Abstract The aim of this study was to explore the relationship between personality and videogame player types. Study participants completed an online survey that gathered information regarding the individual's personality, via the Big Five Inventory, and player types. The study was focused on understanding this relationship in the context of the action role-playing videogame, Fallout New Vegas (FNV). A relationship between personality and player type was found, specifically with respect to the personality traits of openness to experience and conscientiousness. Title Usable advanced visual interfaces in aviation Abstract Aviation systems represent a rich, difficult, and critical environment for usability. The life-critical importance of aviation systems, combined with their complexity and the diversity and uncertainty of the environments in which they operate, necessarily places them at the avant-garde of usability. We describe the broad range of aviation missions, operators, and operating environments and relate these to key dimensions of usability and visual interfaces. We illustrate this concretely with two case examples: ground control stations for remotely piloted aircraft and air traffic management systems. We explore how, in both these cases, advances in machine autonomy and human control promise to enhance operator situational awareness and control, although not without challenges. We share guidelines to ensure effective usability, including: require usability as a key performance parameter, architect and design systems and operations that incorporate affordances and fault tolerance, employ standards to increase learnability and interoperability, instrument environments to tailor usability, and assess effects (Maybury 2012). Finally, remaining challenges and future directions are discussed. Title Children's knowledge and expectations about robots: a survey for future user-centered design of social robots Abstract This paper seeks to establish a precedent for future development and design of social robots by considering the knowledge and expectations about robots of a group of 296 children. Human-robot interaction experiments were conducted with a tele-operated anthropomorphic robot, and surveys were taken before and after the experiments. Children were also asked to perform a drawing of a robot. An image analysis algorithm was developed to classify drawings into 4 types: Anthropomorphic Mechanic/Non Mechanic (AM/AnM) and Non-Anthropomorphic Mechanic/Non Mechanic (nAM/nAnM). The image analysis algorithm was used in combination with human classification using a 2oo3 (two out of three) voting scheme to find children's strongest stereotype about robots. Survey and image analysis results suggest that children generally have some knowledge about robots, and some children even have a deep understanding of and expectations for future robots. Moreover, children's strongest stereotype is directed towards mechanical anthropomorphic systems.
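The posture-monitoring abstract above estimates values such as the neck tilt angle from built-in sensors and warns when unhealthy values persist. The sketch below shows one plausible way to derive a tilt angle from 3-axis accelerometer samples and to flag sustained bad posture; the thresholds and the sample stream are illustrative assumptions, not the paper's parameters.

```python
# Hypothetical sketch: device tilt from a 3-axis accelerometer, with a
# warning when an "unhealthy" tilt is held too long. Values are invented.
import math

def tilt_angle_deg(ax: float, ay: float, az: float) -> float:
    """Angle between the device's screen normal (z axis) and gravity."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    return math.degrees(math.acos(max(-1.0, min(1.0, az / g))))

def monitor(samples, max_angle=45.0, max_seconds=10.0, dt=1.0):
    """Yield a warning once the tilt stays above max_angle for max_seconds."""
    held = 0.0
    for ax, ay, az in samples:
        held = held + dt if tilt_angle_deg(ax, ay, az) > max_angle else 0.0
        if held >= max_seconds:
            yield "warning: sustained unhealthy posture"
            held = 0.0

# Simulated stream: roughly 60 degrees of forward tilt for 12 "seconds".
stream = [(0.0, 8.5, 4.9)] * 12
print(list(monitor(stream)))
```

A real implementation would fuse this with the viewing-distance and gaze estimates the abstract mentions, but the thresholding-over-time pattern is the same.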
Title User-centered development of UI elements for selecting items on a digital map designed for heavy rugged tablet PCs in mass casualty incidents Abstract In a Mass Casualty Incident (MCI) time and good management are critical. Currently, the first arriving rescue units perform the triage algorithm on paper instead of a mobile device. By using mobile devices, the patients' triage state and position can be instantly shared through a network. We implemented a map application to visualize this data on a rugged tablet PC that is intended to be used by the Title Future technology oriented scenarios on e-accessibility Abstract This paper presents a set of future scenarios as a part of our study which explores and analyzes the relationships between the emerging ICT landscape in the European societal and economic context, and the development and provision of e-Accessibility, within a perspective of 10 years. Part of our study is the development and validation of various scenarios regarding the impact of new technologies in accessibility. This paper presents some draft scenarios that were produced by combining technologies referred by experts as crucial for the future of eAccessibility. CCS Human-centered computing Interaction design Interaction design process and methods CCS Human-centered computing Interaction design Interaction design theory, concepts and paradigms Title W5: a meta-model for pen-and-paper interaction Abstract Pen-and-Paper Interaction ( NA Title Model-based training: an approach supporting operability of critical interactive systems Abstract Operation of safety critical systems requires qualified operators that have detailed knowledge about the system they are using and how it should be used. Instructional Design and Technology intends to analyze, design, implement, evaluate, maintain and manage training programs. Among the many methods and processes that are currently in use, the first one to be widely exploited was Instructional Systems Development (ISD) which has been further developed in many ramifications and is part of the Systematic Approach to Training (SAT) instructional design family. One of the key features of these processes (at least when they are refined) is the importance of Instructional Task Analysis, particularly the decomposition of a job in its tasks and sub-tasks in order to decide what knowledge and skills must be acquired by the trainee. This paper proposes to leverage this systematic approach using model-based approaches currently used for interactive systems engineering in order to design such training programs and thus to improve human reliability. The paper explains how task and interactive systems modeling can be bound to job analysis to ensure that each trainee meets the performance goals required. Such training ensures proper learning at the three levels of the Skills Rule Knowledge (SRK) levels of Rasmussen's. In the case study we describe the process for building a training program for operators of satellite ground segments, which is based on and compatible with the Ground Systems and Operations ECSS standard. Then, we propose to enhance this process with a) the application of a Systematic Approach to Training and b) the use of both a System Model and an Operator Task Model. The system model is build using the ICO notation while operators' goals and tasks are described using HAMSTERS notation. 
Title MACS: combination of a formal mixed interaction model with an informal creative session Abstract In this paper, we propose a collaborative design method combining the informal power of creative session and the formal generative power of a mixed interaction model called MACS (Model Assisted Creativity Session). By using a formal notation during creative sessions, interdisciplinary teams systematically explore combinations between the physical and digital spaces and remain focused on the design problem to address. In this paper, we introduce the MACS method principles and illustrate its application on two case studies. Title Buffer automata: a UI architecture prioritising HCI concerns for interactive devices Abstract We introduce an architectural software formalism, Title Combining aspect-oriented modeling with property-based reasoning to improve user interface adaptation Abstract User interface adaptations can be performed at runtime to dynamically reflect any change of context. Complex user interfaces and contexts can lead to the combinatorial explosion of the number of possible adaptations. Thus, dynamic adaptations come across the issue of adapting user interfaces in a reasonable time-slot with limited resources. In this paper, we propose to combine aspect-oriented modeling with property-based reasoning to tame complex and dynamic user interfaces. At runtime and in a limited time-slot, this combination enables efficient reasoning on the current context and on the available user interface components to provide a well suited adaptation. The proposed approach has been evaluated through EnTiMid, a middleware for home automation. Title QUIMERA: a quality metamodel to improve design rationale Abstract With the increasing complexity of User Interfaces (UI) it is more and more necessary to make users understand the UI. We promote a Model-Driven approach to improve the perceived quality through an explicit and observable design rationale. The design rationale is the logical reasons given to justify a designed artifact. The design decisions are not taken arbitrarily, but following some criteria. We propose a Quality Metamodel to justify these decisions along a Model-Driven Engineering approach. Title RIM: risk interaction model for vehicle navigation Abstract Interactive auto-driving systems are used for disabled and elderly persons. In such systems, human errors during operation or interaction could lead to serious consequences during motion. A novel human-robot interaction model, termed risk interaction model (RIM), is proposed for quantitative evaluation of the risk for complex interactive systems in terms of human safety. The risk elements for system-human interaction are defined, and quantitative relations among the elements are formalized based on experimental analysis. Extensive experiments are used to validate RIM. Title Estimation of conversational activation level during video chat using turn-taking information. Abstract In this paper, we discuss the feasibility of estimating the activation level of a conversation by using phonetic and turn-taking features. First, we recorded the voices of conversations of six three-person groups at three different activation levels. Then, we calculated the phonetic and turn-taking features, and analyzed the correlation between the features and the activity level. The analysis revealed that response latency, overlap rate, and speech rate correlate with the activation levels and they are less sensitive to individual deviation. 
Then, we formulated multiple regression equations, and examined the estimation accuracy using the analyzed data of the six three-person groups. The results demonstrated the feasibility to estimate activation level at approximately 18% root-mean-square error (RMSE). Title SlideDeckFinder: identifying related slide decks based on visual appearance and composition patterns Abstract This paper introduces Title Out of Scandinavia to Asia: adaptability of participatory design in culturally distant society Abstract Participatory design (PD) has historically started and traditionally been conducted in Scandinavian contexts, where participation is an integral part of the social value. In this paper, we report our experiences conducting PD approaches in Japan, where social value systems and understandings of participation differ from Scandinavia. The project shows how Japanese utilize PD to solve an extraordinary, disastrous tsunami situation. We exemplify how negative parameters for participation vanish and new social value is created locally and temporary when certain conditions are fulfilled. We argue that culturally distant societies can reasonably adapt PD and use the most of its essence by providing a localized micro-mechanism for consolidating the conditions. CCS Human-centered computing Interaction design Empirical studies in interaction design CCS Human-centered computing Interaction design Systems and tools for interaction design CCS Human-centered computing Collaborative and social computing Collaborative and social computing theory, concepts and paradigms CCS Human-centered computing Collaborative and social computing Collaborative and social computing design and evaluation methods CCS Human-centered computing Collaborative and social computing Collaborative and social computing systems and tools CCS Human-centered computing Collaborative and social computing Empirical studies in collaborative and social computing CCS Human-centered computing Collaborative and social computing Collaborative and social computing devices CCS Human-centered computing Ubiquitous and mobile computing Ubiquitous and mobile computing theory, concepts and paradigms CCS Human-centered computing Ubiquitous and mobile computing Ubiquitous and mobile computing systems and tools CCS Human-centered computing Ubiquitous and mobile computing Ubiquitous and mobile devices CCS Human-centered computing Ubiquitous and mobile computing Ubiquitous and mobile computing design and evaluation methods CCS Human-centered computing Ubiquitous and mobile computing Empirical studies in ubiquitous and mobile computing CCS Human-centered computing Visualization Visualization techniques CCS Human-centered computing Visualization Visualization application domains CCS Human-centered computing Visualization Visualization systems and tools CCS Human-centered computing Visualization Visualization theory, concepts and paradigms CCS Human-centered computing Visualization Empirical studies in visualization CCS Human-centered computing Visualization Visualization design and evaluation methods CCS Human-centered computing Accessibility Accessibility theory, concepts and paradigms CCS Human-centered computing Accessibility Empirical studies in accessibility CCS Human-centered computing Accessibility Accessibility design and evaluation methods CCS Human-centered computing Accessibility Accessibility technologies CCS Human-centered computing Accessibility Accessibility systems and tools CCS Computing methodologies Symbolic and algebraic manipulation 
Symbolic and algebraic algorithms CCS Computing methodologies Symbolic and algebraic manipulation Computer algebra systems CCS Computing methodologies Symbolic and algebraic manipulation Representation of mathematical objects CCS Computing methodologies Parallel computing methodologies Parallel algorithms CCS Computing methodologies Parallel computing methodologies Parallel programming languages Title Language virtualization for heterogeneous parallel computing Abstract As heterogeneous parallel systems become dominant, application developers are being forced to turn to an incompatible mix of low level programming models (e.g. OpenMP, MPI, CUDA, OpenCL). However, these models do little to shield developers from the difficult problems of parallelization, data decomposition and machine-specific details. Most programmers are having a difficult time using these programming models effectively. To provide a programming model that addresses the productivity and performance requirements for the average programmer, we explore a domain-specific approach to heterogeneous parallel programming. We propose language virtualization as a new principle that enables the construction of highly efficient parallel domain specific languages that are embedded in a common host language. We define criteria for language virtualization and present techniques to achieve them. We present two concrete case studies of domain-specific languages that are implemented using our virtualization approach. Title Parallelizing the H.264 decoder on the cell BE architecture Abstract In this paper, we propose parallelization and optimization techniques of the H.264 decoder for the Cell BE processor. We exploit both frame-level parallelism and macroblock pipelining. The major bottleneck in achieving the real-time performance is the entropy decoding stage, CABAC. Our decoder eliminates this bottleneck by exploiting the frame-level parallelism available in the entropy decoding stage. A macroblock software cache and a prefetching technique for the cache are used to facilitate macroblock pipelining. In addition, an asynchronous macroblock buffering technique is used to eliminate the effect of load imbalance between pipeline stages. We evaluate the effectiveness of our approach by implementing a parallel H.264 decoder on an IBM Cell blade server. The evaluation results indicate that our parallel H.264 decoder (with CABAC entropy decoding) on a single Cell BE processor meets the real-time requirement of the full HD standard at level 4.0. Moreover, our decoder also satisfies the real-time requirement at level 4.1 when an additional Cell BE processor is used. Title Scalability versus semantics of concurrent FIFO queues Abstract Maintaining data structure semantics of concurrent queues such as first-in first-out (FIFO) ordering requires expensive synchronization mechanisms which limit scalability. However, deviating from the original semantics of a given data structure may allow for a higher degree of scalability and yet be tolerated by many concurrent applications. We introduce the notion of a Title Parallel and distributed programming extensions for mainstream languages based on pi-calculus Abstract We describe an extension of the Java language with parallel programming primitives inspired by pi-calculus and outline the advantages compared to other parallel programming approaches. Title ACM SRC poster: a portable implementation of the integral histogram in starss Abstract Parallel programming models converge on key concepts.
Program syntax avoids explicitly parallel constructs like threads, and data dependences guide the computation, as opposed to resource-centric models like MPI or OpenMP. Aside from StarSs, StarPU and recently the MAGMA and PLASMA projects encapsulate computation on data blocks in tasks. These are scheduled dynamically via a TDG. We intend to demonstrate the use of StarSs in the development of the Integral Histogram (IH) and analyze the application on SMP, Cell/B.E. and GPU, as opposed to the applications for StarPU, MAGMA and PLASMA, which so far have been limited to numerical linear algebra kernels. IH is a recently proposed preprocessing technique that constructs the histogram for rectangular regions in constant time (e.g., for object recognition, content-based image retrieval, segmentation, detection and tracking). To the best of our knowledge, IH in StarSs is the first parallel implementation of this algorithm in the literature. Title Poster: MINT: a fast and green synchronization technique Abstract The Shadow Thread library is designed and implemented to utilize SMT to tolerate the latencies of memory and communication. A new thread library with fast thread synchronization and low electric power consumption is desirable. In this paper, a novel thread synchronization technique, named Title Poster: High-level, one-sided programming models on MPI: a case study with global arrays and NWChem Abstract Global Arrays (GA) is a popular high-level parallel programming model that provides data and computation management facilities to the NWChem computational chemistry suite. GA's global-view data model is supported by the ARMCI partitioned global address space runtime system, which traditionally is implemented natively on each supported platform in order to provide the best performance. The industry standard Message Passing Interface (MPI) also provides one-sided functionality and is available on virtually every supercomputing system. We present the first high performance, portable implementation of ARMCI using MPI one-sided communication. We interface the existing GA infrastructure with ARMCI-MPI and demonstrate that this approach reduces the amount of resources consumed by the runtime system, provides comparable performance, and enhances portability for applications like NWChem. Title Poster: a GPU-based architecture for real-time data assessment at synchrotron experiments Abstract X-ray tomography has been proven to be a valuable tool for understanding internal, otherwise invisible, mechanisms in biology and other fields. Recent advances in digital detector technology enabled investigation of dynamic processes in 3D with a temporal resolution down to the milliseconds range. Unfortunately it requires computationally intensive reconstruction algorithms with long post-processing times. We have optimized the reconstruction software employed at the micro-tomography beamlines at KIT and ESRF. Using a 4-stage pipelined architecture and the computational power of modern graphics cards, we were able to reduce the processing time by a factor of 75 with a single server. The time required to reconstruct a typical 3D image is reduced to only a few seconds and online visualization is possible for the first time. Title Poster: performance modeling and computational quality of service (CQoS) in synergia2 accelerator simulations Abstract High-precision accelerator modeling is essential for particle accelerator design and optimization. However, this modeling presents a significant computational challenge.
We discuss performance modeling of and computational quality of service (CQoS) results from Synergia2, an advanced particle accelerator simulation code developed under the ComPASS SciDAC-2 accelerator modeling project. Understanding both strong and weak scaling behavior is crucial both to designing code optimizations and to adding new functionality. We ported Synergia2 both to linux clusters and to Surveyor, an IBM® Blue Gene®/P (BG/P) system at Argonne[5]. We then performed detailed profiling which led to a heuristic performance model for a benchmark simulation. The 3-D Hockney space-charge solver was modeled in detail using a component-based approach as it represents the main scaling challenge. These processes helped identify performance bottlenecks and code optimizations. We applied CQoS methods to maintain optimal memory layout and used a component approach to maintain the required CQoS in the space-charge solver. Title A nonblocking set optimized for querying the minimum value Abstract CCS Computing methodologies Artificial intelligence Natural language processing CCS Computing methodologies Artificial intelligence Knowledge representation and reasoning CCS Computing methodologies Artificial intelligence Planning and scheduling CCS Computing methodologies Artificial intelligence Search methodologies CCS Computing methodologies Artificial intelligence Control methods CCS Computing methodologies Artificial intelligence Philosophical/theoretical foundations of artificial intelligence CCS Computing methodologies Artificial intelligence Distributed artificial intelligence CCS Computing methodologies Artificial intelligence Computer vision CCS Computing methodologies Machine learning Learning paradigms CCS Computing methodologies Machine learning Learning settings CCS Computing methodologies Machine learning Machine learning approaches CCS Computing methodologies Machine learning Machine learning algorithms CCS Computing methodologies Machine learning Cross-validation CCS Computing methodologies Modeling and simulation Model development and analysis CCS Computing methodologies Modeling and simulation Simulation theory CCS Computing methodologies Modeling and simulation Simulation types and techniques CCS Computing methodologies Modeling and simulation Simulation support systems CCS Computing methodologies Modeling and simulation Simulation evaluation Title The reversing magnetic field of planet earth Abstract The magnetic field generated in the fluid metallic core of planet Earth is shown. Numerical simulations of the dynamo mechanism, such as this one, exhibit polarity reversals, whereby the north pole moves by 180 degrees; this mimics the behavior documented many times within the geological record. Our simulation solves the equations of momentum transfer, heat transfer and electrodynamics in an electrically conducting and rapidly-rotating fluid at each point in time. High temperatures in the central part of the core drive thermal convection. The total simulation is equivalent to approximately 40,000 years on Earth. Our movie shows the magnetic field lines that enter and exit the core. High magnetic field strength is shown by red and yellow colors, and lower strengths by blue. On each field line we place a small compass needle with red and white ends, which orient itself in the direction of the field. 
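The StarSs poster above builds on the integral histogram (IH), which precomputes per-bin 2-D prefix sums so that the histogram of any axis-aligned rectangle can be read off in constant time per bin. The sequential NumPy sketch below illustrates the data structure itself (the parallel StarSs tasking is out of scope here); the image size and bin count are arbitrary choices for the example.

```python
# Hypothetical sketch of the integral histogram: per-bin 2-D prefix sums,
# queried by inclusion-exclusion. Image and bin count are made up.
import numpy as np

def integral_histogram(img: np.ndarray, bins: int) -> np.ndarray:
    """Return H of shape (rows+1, cols+1, bins), where H[r, c, b] counts
    pixels falling in bin b within img[:r, :c]."""
    rows, cols = img.shape
    one_hot = np.zeros((rows, cols, bins))
    one_hot[np.arange(rows)[:, None], np.arange(cols)[None, :], img] = 1.0
    H = np.zeros((rows + 1, cols + 1, bins))
    H[1:, 1:, :] = one_hot.cumsum(axis=0).cumsum(axis=1)
    return H

def rect_histogram(H, r0, c0, r1, c1):
    """Histogram of img[r0:r1, c0:c1] via inclusion-exclusion: O(bins) work."""
    return H[r1, c1] - H[r0, c1] - H[r1, c0] + H[r0, c0]

rng = np.random.default_rng(2)
img = rng.integers(0, 8, size=(64, 64))        # 8 intensity bins
H = integral_histogram(img, bins=8)
hist = rect_histogram(H, 10, 20, 40, 50)
assert np.array_equal(hist, np.bincount(img[10:40, 20:50].ravel(), minlength=8))
print(hist)
```

In a task-parallel setting such as StarSs, the two cumulative-sum passes are what get decomposed into per-block tasks with dependencies; the query step stays the same.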
Title Using the XSEDE supercomputing and visualization resources to improve tornado prediction using data mining Abstract In this paper we introduce the use of XSEDE resources and mathematical models for the simulation of tornadoes, as well as novel techniques for analyzing the results of these simulations. Title Darwinian rivers: evolving stream topographies to match hyporheic residence time distributions Abstract We employed genetic algorithms to investigate the relationship between stream topographies and their associated hyporheic residence time distributions. A hyporheic residence time is the time it takes a water particle to enter the sediments below a stream, travel through the sediment, and re-enter the surface water of the stream. This subsurface journey affects stream chemistry and water quality, and increased knowledge of this process could be helpful in addressing the environmental problems caused by excess nutrients and waterborne pollutants in riverine ecosystems. We used a multi-scale two-dimensional model, lightly adapted from three previous models, to calculate residence time distributions from system characteristics. Our primary goal is the investigation of the "RTD inverse problem" - discovering stream topographies that would generate a specified target residence time distribution (RTD). We used genetic algorithms to evolve the shape of stream topographies (represented by Fourier series) to discover shapes that yield RTDs that closely match the target RTD. Our contributions are: a) the specification of the RTD inverse problem, b) evidence that genetic algorithms provide an effective method for approaching this problem, and c) the discovery of some unanticipated patterns among the evolved topographies. This early work seems promising and should encourage further applications of evolutionary computing in this area, with eventual application to stream restoration projects. Title The effects of common random numbers on stochastic kriging metamodels Abstract Ankenman et al. introduced stochastic kriging as a metamodeling tool for representing stochastic simulation response surfaces, and employed a very simple example to suggest that the use of Common Random Numbers (CRN) degrades the capability of stochastic kriging to predict the true response surface. In this article we undertake an in-depth analysis of the interaction between CRN and stochastic kriging by analyzing a richer collection of models; in particular, we consider stochastic kriging models with a linear trend term. We also perform an empirical study of the effect of CRN on stochastic kriging. We also consider the effect of CRN on metamodel parameter estimation and response-surface gradient estimation, as well as response-surface prediction. In brief, we confirm that CRN is detrimental to prediction, but show that it leads to better estimation of slope parameters and superior gradient estimation compared to independent simulation. Title Confidence intervals for quantiles when applying variance-reduction techniques Abstract Quantiles, which are also known as values-at-risk in finance, frequently arise in practice as measures of risk. This article develops asymptotically valid confidence intervals for quantiles estimated via simulation using variance-reduction techniques (VRTs). We establish our results within a general framework for VRTs, which we show includes importance sampling, stratified sampling, antithetic variates, and control variates. 
Our method for verifying asymptotic validity is to first demonstrate that a quantile estimator obtained via a VRT within our framework satisfies a Bahadur-Ghosh representation. We then exploit this to show that the quantile estimator obeys a central limit theorem (CLT) and to develop a consistent estimator for the variance constant appearing in the CLT, which enables us to construct a confidence interval. We provide explicit formulae for the estimators for each of the VRTs considered. Title A computational investigation of wireless sensor network simulation Abstract A wireless sensor network (WSN) is a dynamic system of interacting sensor nodes that must be able to combine its understanding of the physical world with its computational and control functions and operate with constrained resources. Simulation involves addressing a wide range of WSN issues such as limited energy reserves, computation power, communication capabilities, and automated sensor nodes. In this project, simulation is used to evaluate the performance of five (5) simulation tools -- ATEMU, AVRORA, Castalia, JProwler and SENSE -- where each will be subjected to similar development, implementation and testing using a local graphics processing unit (GPU) cluster. This project aims to provide researchers and developers with beneficial computational data on the performance of specific benchmark algorithms and source code in a simulated WSN environment. Title Interactive uncertainty analysis Abstract Humans have difficulty evaluating the effects of uncertainty on schedules. People often mitigate the effects of uncertainty by adding slack based on experience and non-stochastic analyses such as the critical path method (CPM). This is costly as it leads to longer than necessary schedules, and can be ineffective without a clear understanding of where slack is needed. COMPASS is an interactive real-time tool that analyzes schedule uncertainty for a stochastic task network. An important feature is that it concurrently calculates stochastic critical paths and critical tasks. COMPASS visualizes this information on top of a traditional Gantt view, giving users insight into how delays caused by uncertain durations propagate down the schedule. Evaluations with 10 users show that users can use COMPASS to answer a variety of questions about the possible evolutions of a schedule (e.g., what is the likelihood that all activities will complete before a given date?) Title Examples of in transit visualization Abstract One of the most pressing issues with petascale analysis is the transport of simulation results data to a meaningful analysis. Traditional workflow prescribes storing the simulation results to disk and later retrieving them for analysis and visualization. However, at petascale this storage of the full results is prohibitive. A solution to this problem is to run the analysis and visualization concurrently with the simulation and bypass the storage of the full results. One mechanism for doing so is Title Petaflop biofluidics simulations on a two million-core system Abstract We present a computational framework for multi-scale simulations of real-life biofluidic problems. The framework allows to simulate suspensions composed by hundreds of millions of bodies interacting with each other and with a surrounding fluid in complex geometries. 
We apply the methodology to the simulation of blood flow through the human coronary arteries with a spatial resolution comparable to the size of red blood cells, and physiological levels of hematocrit (the red blood cell volume fraction). The simulation exhibits excellent scalability on a cluster of 4000 M2050 Nvidia GPUs and achieves close to 1 Petaflop aggregate performance, which demonstrates the capability to predict the evolution of biofluidic phenomena of clinical significance. The combination of novel mathematical models, computational algorithms, hardware technology, code tuning and optimization required to achieve these results is presented. Title Analysis and design of IEEE 802.16 uplink scheduling algorithms and proposing the IRA algorithm for rtPS QoS class Abstract Scheduling algorithms in WiMAX are of great importance due to their role in delivering high-speed broadband while satisfying traffic Quality of Service (QoS) constraints. A well-designed scheduling algorithm should carefully deal with throughput maximization, delay constraint satisfaction and maintaining fairness among subscribers. This paper focuses on the effect of uplink (UL) scheduling on the performance of the real-time Polling Service (rtPS) QoS class, using OFDM as the physical layer, where rtPS is a service class for variable bit rate (VBR) data such as MPEG compressed video. Ensuring QoS constraints for the rtPS class is challenging due to critical delay constraints and throughput requirements. The paper proposes an uplink scheduling algorithm called the Instantaneously Replacing Algorithm (IRA). The algorithm mainly schedules connections based on their Signal to Noise Ratio (SNR) but instantaneously replaces high-SNR connections with connections that may violate their intended QoS requirements. The proposed algorithm is analyzed in the paper using an NS-2 simulation model. Compared to another set of UL schedulers, simulation results show that the proposed algorithm enhances QoS satisfaction in the network as it tends to minimize the delay while distributing the network resources in a fair manner among Subscriber Stations (SSs) and maintaining a throughput comparable to that achieved using SNR-based approaches. CCS Computing methodologies Computer graphics Animation CCS Computing methodologies Computer graphics Rendering CCS Computing methodologies Computer graphics Image manipulation CCS Computing methodologies Computer graphics Graphics systems and interfaces CCS Computing methodologies Computer graphics Image compression Title Length-preserving bit-stream-based JPEG encryption Abstract We propose a new method to encrypt baseline JPEG bit streams by selective Huffman code word swapping and coefficient value scrambling based on AES encryption. Furthermore, we show that our approach preserves the length of the bit stream while being completely format-compliant. In contrast to most existing approaches, no recompression is necessary as the encryption is applied directly to the bit stream. In addition, we assess the effort required for brute-force and known-plaintext attacks on pictures encrypted with our approach, showing that both are practically infeasible. Title Scalable depth map coding for 3D video using contour information Abstract In this paper, a scalable depth map coding method is proposed to accomplish better coding performance. First of all, in order to exploit the correlation between the color video and the depth map, a structure from SVC is applied to 3DVC.
As the depth map is mainly used to synthesize videos, a corrupted contour region can damage the overall quality of the video. We therefore adopt a new differential quantization method when separating the contour region. The experimental results show that the proposed method can improve video quality by 0.07 to 0.49 dB when compared to the reference software, JSVM 9.19. Title Distributed video coding with compressive measurements Abstract This paper presents a novel distributed video coding (DVC) scheme using compressive sensing (CS) that achieves low-complexity encoding and efficient signal sensing. Most CS recovery algorithms rely only on signal sparsity. Yet, under the DVC architecture, additional statistical characterization of the signal is available, which offers the potential for more precise CS recovery. First, a set of random measurements is acquired and transmitted to the decoder. The decoder then exploits the statistical characterization of the signal and generates the side information (SI). Finally, utilizing the SI, Bayesian inference using belief propagation (BP) decoding is performed for signal recovery. The proposed CS-DVC system offers a more direct way of signal acquisition and the potential for more precise estimation of the signal from random measurements. Experimental results indicate that SI can improve the signal reconstruction quality in comparison with a CS recovery algorithm that relies only on sparsity. Title Real-time decoding for LDPC based distributed video coding Abstract Wyner-Ziv (WZ) video coding -- a particular case of distributed video coding (DVC) -- is well known for its low-complexity encoding and high-complexity decoding characteristics. Although progress has been made in recent years, especially in improving coding efficiency, most reported WZ codecs have high decoding delay, which hinders their practical value for applications with critical timing constraints. In this paper, a fully parallelized sum-product algorithm (SPA) for low density parity check accumulate (LDPCA) codes is proposed and realized through the Compute Unified Device Architecture (CUDA) on a General-Purpose Graphics Processing Unit (GPGPU). Simulation results show that, through our work, QCIF (surveillance) videos can be decoded in real time with extremely high quality and without rate-distortion (RD) performance loss. Title High efficient distributed video coding with parallelized design for cloud computing Abstract In this work, by combining coding tools developed in the recent literature on transform-domain WZ video coding with some newly developed modules on both the encoding and decoding sides, an efficient and practical WZ video coding architecture, dubbed DIStributed video coding with PArallelized design for Cloud computing (DISPAC), is proposed to improve the corresponding rate-distortion (RD) performance. Another unique feature of DISPAC lies in the parallelizability of the modules used by its WZ decoder, which greatly increases the decoding speed.
Experimental results obtained on an emulated cloud computing environment reveal that the DISPAC codec can gain up to 3.6 dB in RD measures and decode up to 60.97 times faster compared with a state-of-the-art WZ video codec. Title Geometric distortion measurement for shape coding: A contemporary review Abstract Geometric distortion measurement and the associated metrics involved are integral to the Rate Distortion (RD) shape coding framework, with the efficacy of the metrics, importantly, being strongly influenced by the underlying measurement strategy. This has been the catalyst for many different techniques, with this article presenting a comprehensive review of geometric distortion measurement, the diverse metrics applied, and their impact on shape coding. The respective performance of these measuring strategies is analyzed from both an RD and a complexity perspective, with a recent distortion measurement technique based on arc-length parameterization being comparatively evaluated. Some contemporary research challenges are also investigated, including schemes to effectively quantify shape deformation. Title Building multimedia security applications in the MPEG reconfigurable video coding (RVC) framework Abstract Although used by most system developers, imperative languages are known for not being able to provide easily reconfigurable, platform-independent and strictly modular applications. ISO/IEC has recently developed a new video coding standard called Reconfigurable Video Coding (RVC), with the objective of providing modular and concurrent specifications of complex video codecs that constitute a better starting point for the implementation of applications using video compression. Multimedia security applications are traditionally developed in imperative languages mainly because the required multimedia codecs were only available as specifications and implementations based on imperative languages. Therefore, aside from the technical challenges inherited from multimedia codecs, multimedia security applications also face a number of other challenges that are specific to them. Since a number of multimedia codecs are already available in the RVC framework, multimedia security applications can now also be developed using this new development framework. This paper explains why the RVC framework approach can be used to overcome those technical challenges more efficiently than existing imperative languages. In addition, the paper demonstrates how the RVC framework can be used to quickly develop multimedia security applications by presenting some examples, including a joint H.264/AVC video encryption-encoding system, a joint JPEG image encryption-encoding system and an image watermarking system in the JPEG compressed domain. Title Scalable video transmission: packet loss induced distortion modeling and estimation Abstract To provide enhanced multimedia services for heterogeneous networks and terminal devices, Scalable Video Coding (SVC) has been developed to embed different qualities of video in a single bitstream. Similar to classical compressed video transmission, different packets of a video bitstream have different impacts on received video quality. Therefore, distortion modeling and estimation are necessary in designing a robust video transmission strategy under various network conditions. In this paper, we present the first scheme for packet loss induced distortion modeling and estimation in SVC transmission.
The proposed scheme is applicable to numerous video communication and networking scenarios in which accurate distortion information can be utilized to enhance the performance of video transmission. One major challenge in scalable video distortion estimation is the adoption of a more complicated prediction structure in SVC, which makes tracking error propagation much more difficult than for non-scalable encoded video. In this research, we tackle this challenge by systematically tracking the propagation of errors under various prediction trajectories. Supplemental information about the compressed video is embedded into data packets to substantially simplify the modeling and estimation. Moreover, with supplemental inter-prediction information, distortion estimation can be performed without parsing the video bitstream, which results in much lower computation and memory cost. With negligible effect on data size, experimental results show that the proposed scheme is able to track and estimate the distortion with very high accuracy. This first-ever scalable video transmission distortion modeling and estimation scheme can be deployed at either gateways or receivers because of its low computation and memory cost. Title Performance measurement for a wavelet transform-based video compression Abstract A wavelet transform-based video compression algorithm consists of i) 3D wavelet transform, ii) quantization, and iii) coding. Since wavelet analysis has similarities with characteristics of the human visual system, it is desirable to use a quality metric that agrees with human perceptual evaluation. One of the widely used quality metrics for reconstructed images and video is the peak signal to noise ratio (PSNR). The disadvantage of PSNR is that the result does not correlate very well with subjective or human evaluation of the perceived quality. The Structural Similarity (SSIM) index is a quality metric that quantifies quality based on structural information variation in the image due to the error signal, rather than the visibility of the error signal, and performs better than PSNR in terms of perceptual correlation with human observers [17]. This paper describes an implementation of a 3D wavelet transform-based video compression model and a new video quality metric that extends SSIM to the third dimension. Experiments show that the new metric is perceptually more accurate than PSNR for measuring video quality. Title Low bit rate video processing algorithm Abstract This project targets low bit rate video. Various techniques such as MPEG-4, WMV, and H.264/AVC address this goal. In this project, H.264/AVC has been modified to further reduce the bit rate of the videos. Quality drops slightly, but the lower bit rate, which is the most important criterion under limited bandwidth, is achieved. CCS Computing methodologies Computer graphics Shape modeling CCS Computing methodologies Distributed computing methodologies Distributed algorithms CCS Computing methodologies Distributed computing methodologies Distributed programming languages Title From high-level component-based models to distributed implementations Abstract Although distributed systems are widely used nowadays, their implementation and deployment are still time-consuming, error-prone, and hardly predictable tasks.
In this paper, we propose a methodology for producing automatically efficient and correct-by-construction distributed implementations by starting from a high-level model of the application software in BIP. BIP (Behavior, Interaction, Priority) is a component-based framework with formal semantics that rely on multi-party interactions for synchronizing components. Our methodology transforms arbitrary BIP models into Send/Receive BIP models, directly implementable on distributed execution platforms. The transformation consists of (1) breaking atomicity of actions in atomic components by replacing strong synchronizations with asynchronous Send/Receive interactions; (2) inserting several distributed controllers that coordinate execution of interactions according to a user-defined partition, and (3) augmenting the model with a distributed algorithm for handling conflicts between controllers preserving observational equivalence to the initial models. Currently, it is possible to generate from Send/Receive models stand-alone C++ implementations using either TCP sockets for conventional communication, or MPI implementation, for deployment on multi-core platforms. This method is fully implemented. We report concrete results obtained under different scenarios. Title Task-level analysis for a language with async/finish parallelism Abstract The task level of a program is the maximum number of tasks that can be available (i.e., not finished nor suspended) simultaneously during its execution for any input data. Static knowledge of the task level is of utmost importance for understanding and debugging parallel programs as well as for guiding task schedulers. We present, to the best of our knowledge, the first static analysis which infers safe and precise approximations on the task level for a language with async-finish parallelism. In parallel languages, async and finish are basic constructs for, respectively, spawning tasks and waiting until they terminate. They are the core of modern, parallel, distributed languages like X10. Given a (parallel) program, our analysis returns a task-level upper bound, i.e., a function on the program's input arguments that guarantees that the task level of the program will never exceed its value along any execution. Our analysis provides a series of useful (over)-approximations, going from the total number of tasks spawned in the execution up to an accurate estimation of the task level. Title Byzantine agreement with homonyms Abstract So far, the distributed computing community has either assumed that all the processes of a distributed system have distinct identifiers or, more rarely, that the processes are anonymous and have no identifiers. These are two extremes of the same general model: namely, We show that having 3 Title Optimal-time adaptive strong renaming, with applications to counting Abstract We give two new randomized algorithms for strong renaming, both of which work against an adaptive adversary in asynchronous shared memory. The first uses repeated sampling over a sequence of arrays of decreasing size to assign unique names to each of Title A scalability benchmark suite for Erlang/OTP Abstract Programming language implementers rely heavily on benchmarking for measuring and understanding performance of algorithms, architectural designs, and trade-offs between alternative implementations of compilers, runtime systems, and virtual machine components. 
Given this fact, it seems a bit ironic that it is often more difficult to come up with a good benchmark suite than a good implementation of a programming language. This paper presents the main aspects of the design and the current status of bencherl, a publicly available scalability benchmark suite for applications written in Erlang. In contrast to other benchmark suites, which are usually designed to report a particular performance point, our benchmark suite aims to assess Title A meta-scheduler for the par-monad: composable scheduling for the heterogeneous cloud Abstract Modern parallel computing hardware demands increasingly specialized attention to the details of scheduling and load balancing across heterogeneous execution resources that may include GPU and cloud environments, in addition to traditional CPUs. Many existing solutions address the challenges of particular resources, but do so in isolation, and in general do not compose within larger systems. We propose a general, composable abstraction for execution resources, along with a continuation-based meta-scheduler that harnesses those resources in the context of a deterministic parallel programming library for Haskell. We demonstrate performance benefits of combined CPU/GPU scheduling over either alone, and of combined multithreaded/distributed scheduling over existing distributed programming approaches for Haskell. Title Parallel PageRank computation using GPUs Abstract Fast and efficient computation of web rank scores is a critical issue for search engines today. Because of the enormous size of the data and the dynamic nature of the World Wide Web, this computation is generally executed on large web graphs (up to billions of webpages) and must be refreshed frequently, so it becomes a challenging task. In this paper, we propose an efficient method for computing the PageRank score -- Google's ranking method based on analyzing the link structure of the Web -- on graphics processing units (GPUs). We employed a slightly modified storage data format called the binary 'link structure file', inspired by [2], for storing the web graph data. We then divided the PageRank calculation phases into parallel operations to exploit the computing power of the graphics cards. Our program was written in CUDA and tested on a system equipped with two dual-GPU NVIDIA GeForce GTX 295 graphics cards, using two real datasets crawled from Vietnamese sites containing 7 million pages with 132 million links and 15 million pages with 200 million links, respectively. The experimental results showed that the computation speed increased by 10 to 20 times compared to a CPU-based version on an Intel Q8400 at 2.67 GHz, on both datasets. Our method can also scale well to larger web graphs. Title Message-driven FP-growth Abstract Frequent itemset mining finds frequently occurring itemsets in transactional data. This is applied to diverse problems such as decision support, selective marketing, financial forecasting and medical diagnosis. One of the best-known algorithms for frequent itemset mining is FP-growth (Frequent Pattern growth). We develop a cloud-enabled algorithmic variant for frequent itemset mining that scales with very little communication and computational overhead and, even with only one worker node, is faster than FP-growth.
We develop the concept of a Title Virtual heritage to go Abstract In this paper we show our conceptual approach of how easy it can be to develop web apps that provide real-time 3D support, behave like native apps and run platform independently on smartphones, tablets (e.g., iPad), and on desktop computers. This reduces development efforts while moving to a distributed application model. The concept is completely based on standard web technologies like HTML5, CSS3, DOM scripting, and Ajax. 3D rendering happens entirely on the client-side by utilizing X3DOM and WebGL respectively. In the context of virtual museums web apps can be used to give visitors and also experts such as curators the possibility to examine virtual heritage objects. By interacting with the 3D model more details can be explored, additional information in form of metadata and annotations can be obtained and also created, and finally the navigation to external resources is supported, too. It is also possible to inspect related objects of similar type, even if they are situated in locations that are far away. Title Faster randomized consensus with an oblivious adversary Abstract Two new algorithms are given for randomized consensus in a shared-memory model with an oblivious adversary. Each is based on a new construction of a conciliator, an object that guarantees termination and validity, but that only guarantees agreement with constant probability. The first conciliator assumes unit-cost snapshots and achieves agreement among CCS Computing methodologies Concurrent computing methodologies Concurrent programming languages Title Electronic poster: parallel algorithms for high accuracy NC milling simulation Abstract In the present work, we demonstrate multithreaded algorithms for high-accuracy NC milling simulation. Our approach to simulation - Boolean differences between a set of analytic or procedural, signed, Euclidiean distance fields - is able to represent a milled workpiece, produced from hundreds-of-thousands of milling instructions, in under 50MB of space to an accuracy of 1μm; however, computationally intensive ray-casting limits rendering and editing performance (ie, interactivity). To increase interactivity, we developed a master-workers thread-manager that could be integrated into the existing code-base with minimal changes. In our poster, we describe the details of our system, thread-manager, and approach to dividing the rendering/editing algorithms into units-of-work that are dispatched for execution by the thread-manager. Our experiments reveal performance gains near unity in the number of available cores (eg, 77-97% and >98% speedup per additional core for rendering and editing, respectively). Title On the power of hardware transactional memory to simplify memory management Abstract Dynamic memory management is a significant source of complexity in the design and implementation of practical concurrent data structures. We study how hardware transactional memory (HTM) can be used to simplify and streamline memory reclamation for such data structures. We propose and evaluate several new HTM-based algorithms for the "Dynamic Collect" problem that lies at the heart of many modern memory management algorithms. We demonstrate that HTM enables simpler and faster solutions, with better memory reclamation properties, than prior approaches. 
Despite recent theoretical arguments that HTM provides no worst-case advantages, our results support the claim that HTM can provide significantly better common-case performance, as well as reduced conceptual complexity. Title Poster: hybrid parallelization of a realistic heart model Abstract Heart failure is a major health problem, not only for the number of people affected (about five million in Europe alone) but also because of the direct and indirect costs for its treatment. A thorough understanding of the complex electrical activation system that triggers the mechanical contraction is a prerequisite for developing effective treatment strategies. Full-heart simulations are an indispensable tool to study the effect of molecular-level or tissue-level changes on clinical measurements [2]. Cardiac electrical activity originates in the millions of ion channels and pumps that are located in the outer membrane of each cardiac muscle cell. We denote the macroscopic ionic current density by I Title Parallel discrete event simulation with Erlang Abstract Discrete Event Simulation (DES) is a widely used technique in which the state of the simulator is updated by events happening at discrete points in time (hence the name). DES is used to model and analyze many kinds of systems, including computer architectures, communication networks, street traffic, and others. Parallel and Distributed Simulation (PADS) aims at improving the efficiency of DES by partitioning the simulation model across multiple processing elements, in order to enable larger and/or more detailed studies to be carried out. The interest on PADS is increasing since the widespread availability of multicore processors and affordable high performance computing clusters. However, designing parallel simulation models requires considerable expertise, the result being that PADS techniques are not as widespread as they could be. In this paper we describe ErlangTW, a parallel simulation middleware based on the Time Warp synchronization protocol. ErlangTW is entirely written in Erlang, a concurrent, functional programming language specifically targeted at building distributed systems. We argue that writing parallel simulation models in Erlang is considerably easier than using conventional programming languages. Moreover, ErlangTW allows simulation models to be executed either on single-core, multicore and distributed computing architectures. We describe the design and prototype implementation of ErlangTW, and report some preliminary performance results on multicore and distributed architectures using the well known PHOLD benchmark. Title A scalability benchmark suite for Erlang/OTP Abstract Programming language implementers rely heavily on benchmarking for measuring and understanding performance of algorithms, architectural designs, and trade-offs between alternative implementations of compilers, runtime systems, and virtual machine components. Given this fact, it seems a bit ironic that it is often more difficult to come up with a good benchmark suite than a good implementation of a programming language. This paper presents the main aspects of the design and the current status of bencherl, a publicly available scalability benchmark suite for applications written in Erlang. 
In contrast to other benchmark suites, which are usually designed to report a particular performance point, our benchmark suite aims to assess Title A meta-scheduler for the par-monad: composable scheduling for the heterogeneous cloud Abstract Modern parallel computing hardware demands increasingly specialized attention to the details of scheduling and load balancing across heterogeneous execution resources that may include GPU and cloud environments, in addition to traditional CPUs. Many existing solutions address the challenges of particular resources, but do so in isolation, and in general do not compose within larger systems. We propose a general, composable abstraction for execution resources, along with a continuation-based meta-scheduler that harnesses those resources in the context of a deterministic parallel programming library for Haskell. We demonstrate performance benefits of combined CPU/GPU scheduling over either alone, and of combined multithreaded/distributed scheduling over existing distributed programming approaches for Haskell. Title Parallel computing: thoughts following a four-year tour of academic outreach Abstract Title A static analysis tool using a three-step approach for data races in HPC programs Abstract Multicore processors are becoming dominant in the high performance computing (HPC) area, so multithread programming with OpenMP is becoming a key to good performance on such processors, though debugging problems remain. In particular, it is difficult to detect data races among threads with nondeterministic results, thus calling for tools to detect data races. Because HPC programs tend to run for long periods, detection tools that do not need to run the target programs are strongly preferred. We developed a static program analysis tool to detect data races in OpenMP loops in FORTRAN programs. Programmers can quickly use the tool at compile time without executing the target program. Because static analysis tools tend to report many false positives, we counted the false positives in some large applications to assess the utility and limits of static analysis tools. We have devised a new approach to detect data races. Our approach combines existing program analysis methods with a new analysis. We experimented with NAS parallel benchmarks and two real applications, GTC for plasma physics and GFMC for nuclear physics. Our new analysis method also reduces number of reported candidates from totally 97 to 33 in these applications. We found 13 previously unknown bugs out of 33 candidates reported by our prototype. Our analysis is fast enough for practical use, since the analysis time for the NAS parallel benchmark was shorter than the compilation time (18.5 seconds compared to 33.0 seconds). Title Brief announcement: there are plenty of tasks weaker than perfect renaming and stronger than set agreement Abstract In the asynchronous Title Brief announcement: increasing the power of the iterated immediate snapshot model with failure detectors Abstract This short paper shows how to capture failure detectors so that the base asynchronous read/wite model and the distributed iterated model have the same computational power when both are enriched with the same failure detector. To that end it introduces the notion of a "strongly correct" process and presents simulations that prove the computational equivalence when both models are enriched with the same failure detector. 
Interestingly, these simulations, which work for a large family of failure detector classes, can be easily extended to the case where the wait-freedom requirement is replaced by the notion of CCS Computing methodologies Concurrent computing methodologies Concurrent algorithms CCS Applied computing Electronic commerce Digital cash Title Liquidity in credit networks: a little trust goes a long way Abstract Credit networks represent a way of modeling trust between entities in a network. Nodes in the network print their own currency and trust each other for a certain amount of each other's currency. This allows the network to serve as a decentralized payment infrastructure---arbitrary payments can be routed through the network by passing IOUs between trusting nodes in their respective currencies---and obviates the need for a common currency. Nodes can repeatedly transact with each other and pay for the transaction using trusted currency. A natural question to ask in this setting is: how long can the network sustain liquidity, i.e., how long can the network support the routing of payments before credit dries up? We answer this question in terms of the long-term failure probability of transactions for various network topologies and credit values. We prove that the transaction failure probability is independent of the path along which transactions are routed. We show that under symmetric transaction rates, the transaction failure probability in a number of well-known graph families goes to zero as the size, density or credit capacity of the network increases. We also show via simulations that even networks of small size and credit capacity can route transactions with high probability if they are well-connected. Further, we characterize a centralized currency system as a special type of star network (one where edges to the root have infinite credit capacity, and transactions occur only between leaf nodes) and compute the steady-state transaction failure probability in a centralized system. We show that liquidity in star networks, complete graphs and Erdos-Renyi networks is comparable to that in equivalent centralized currency systems; thus we do not lose much liquidity in return for their robustness and decentralized properties. Title PSP: private and secure payment with RFID Abstract RFID can be used for a variety of applications, e.g., to conveniently pay for public transportation. However, achieving security and privacy of payment is challenging due to the extreme resource restrictions of RFID tags. In this paper, we propose PSP -- a secure, RFID-based protocol for privacy-preserving payment. Similar to traditional electronic cash, the user of a tag can pay for access to a metro using his tag and so-called coins of a virtual currency. With PSP, tags do not need to store valid coins, but generate them on the fly. Using Bloom filters, readers can verify the validity of generated coins offline. PSP guarantees privacy such that neither the metro nor an adversary can reveal the identity of a user or link subsequent payments. PSP is secure against invention and overspending of coins, and can reveal the identity of users trying to double-spend coins. Still, PSP is lightweight: it requires only a hash function and a few bytes of non-volatile memory on the tag.
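The PSP abstract above rests on two mechanisms: tags deriving coins on the fly from a hash function, and readers checking coin validity offline against a Bloom filter. The Python sketch below illustrates only that membership-checking idea; the coin format, hash choices, and issuance flow are invented for the example and are not the actual PSP protocol.

import hashlib

class BloomFilter:
    """Minimal Bloom filter; k hash positions derived from SHA-256."""
    def __init__(self, size_bits=8192, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

# Hypothetical issuer: derive the coins a tag could generate from a shared
# secret, load them into a Bloom filter, and hand that filter to offline readers.
def issue_filter(tag_secret: bytes, n_coins: int) -> BloomFilter:
    bf = BloomFilter()
    for counter in range(n_coins):
        coin = hashlib.sha256(tag_secret + counter.to_bytes(4, "big")).digest()
        bf.add(coin)
    return bf

# Reader side (offline): accept a coin only if the filter may contain it.
reader_filter = issue_filter(b"demo-tag-secret", n_coins=100)
valid_coin = hashlib.sha256(b"demo-tag-secret" + (7).to_bytes(4, "big")).digest()
print(reader_filter.might_contain(valid_coin))       # True: a coin the tag can derive
print(reader_filter.might_contain(b"forged-coin"))   # almost certainly False

As in any Bloom filter, the check admits a small false-positive rate that shrinks as the filter is sized up relative to the number of valid coins.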
Title From meiwaku to tokushita!: lessons for digital money design from japan Abstract Based on ethnographically-inspired research in Japan, we report on people's experiences using digital money payment systems that use Sony's FeliCa near-field communication smartcard technology. As an example of ubiquitous computing in the here and now, the adoption of digital money is found to be messy and contingent, shot through with cultural and social factors that do not hinder this adoption but rather constitute its specific character. Adoption is strongly tied to Japanese conceptions of the aesthetic and moral virtue of smooth flow and avoidance of commotion, as well as the excitement at winning something for nothing. Implications for design of mobile payment systems stress the need to produce open-ended platforms that can serve as the vehicle for multiple meanings and experiences without foreclosing such possibilities in the name of efficiency. Title Robust DWT-SVD domain image watermarking: embedding data in all frequencies Abstract Title The infonomics workshop on electronic market design Abstract Title Internet based auctions: a survey on models and applications Abstract Title The fifth International conference on autonomous agents: an E-commerce perspective Abstract Title Efficiency and price discovery in multi-item auctions Abstract Title Issues in the law of e-commerce Abstract CCS Applied computing Electronic commerce E-commerce infrastructure Title Empowerment of rural farmers through information sharing using inexpensive technologies Abstract This paper discusses how to empower rural farmers to do business by means of inexpensive mobile technologies. In particular, the aim is to take advantage of the inexpensive features of low-end mobile phones to access market related information and to allow farmers to promote their commodities competitively. The research targeted rural Transkei farmers in the Eastern Cape. The farmers' requirements were identified and a prototype for a low- as well as high-end mobile environment was designed to address these requirements. The following features were effected: registration of users, posting of commodities, retrieval of information, and communication with others. The access to the system is through a website (on the phone or a personal computer) or by means of unstructured supplementary service data. The prototype was implemented and tested. The users found the technology easy to use. Title An approach for business transaction management Abstract Business Transactions can be seen as a hierarchy of tasks, in which execution is orchestrated in order to manage the different interactions among implied services. Business Transactions are generally long running, consisting of sub-transactions that may fail or be cancelled. In addition, the problem also entails concurrent access to data available via Web services. There are many solutions, namely compensation and locking, as present in the "DBMSs" transactional model adapted to Business Process model. The locking restricts access and degrades the Quality of Service. Compensation can be complicated to implement and costly in terms of performance. Each one of these solutions has a cost. In this paper we propose a cost model for strategies based on locking and on compensation to compare them in the execution plan of a Business Transaction. This comparison allows us to choose the least expensive strategy. 
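The business transaction abstract above proposes choosing between locking and compensation via a cost model but does not spell the model out. The toy Python sketch below, with invented cost terms and parameter names, only illustrates the idea of comparing the two strategies per sub-transaction and picking the cheaper one; it is not the paper's model.

from dataclasses import dataclass

# Hypothetical per-sub-transaction parameters; all names and cost formulas
# below are illustrative assumptions, not the paper's actual cost model.
@dataclass
class SubTransaction:
    name: str
    duration: float           # expected execution time (s)
    contention: float         # expected number of concurrent conflicting requests
    failure_prob: float       # probability the work must later be undone
    compensation_time: float  # cost of running the compensating action (s)

def locking_cost(st: SubTransaction) -> float:
    # Approximate cost: time other requests spend blocked while locks are held.
    return st.duration * st.contention

def compensation_cost(st: SubTransaction) -> float:
    # Approximate cost: expected extra work of compensating on failure.
    return st.failure_prob * st.compensation_time

def choose_strategies(plan):
    for st in plan:
        lock_c, comp_c = locking_cost(st), compensation_cost(st)
        strategy = "locking" if lock_c < comp_c else "compensation"
        print(f"{st.name}: locking={lock_c:.2f}s, compensation={comp_c:.2f}s -> {strategy}")

choose_strategies([
    SubTransaction("reserve-stock", duration=0.2, contention=4.0,
                   failure_prob=0.05, compensation_time=1.0),
    SubTransaction("charge-card", duration=1.5, contention=0.5,
                   failure_prob=0.20, compensation_time=2.0),
])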
Title Privacy-preserving linear programming Abstract With the rapid increase in computing, storage and networking resources, data is not only collected and stored, but also analyzed. This creates a serious privacy problem which often inhibits the use of this data. In this paper, we focus on the problem of linear programming, which is the most important sub-class of optimization problems. We consider the case where the objective function and the constraints are partitioned between two parties, with one party holding the objective while the other holds the constraints. We propose a very efficient and secure transformation-based solution that has the significant added benefit of being independent of the specific linear programming algorithm used. Title A prototype design for DRM based credit card transaction in E-commerce Abstract In e-commerce, credit cards have gained popularity as a sophisticated payment mechanism. With the increase in credit card use on the web, credit card fraud has gone up dramatically, which causes inconvenience for customers and, for merchants, loss of customers. To combat credit card fraud and to regain customers' trust, an attempt is made here to design a trust-based payment system in which the customer does not need to disclose his/her credit card number during the transaction, and hence they can feel safe. In this newly proposed system, the bank or the issuer of the credit card performs the transaction on behalf of the customer. This is done by having the bank generate a single-use 'token' that includes information about the customer, merchant, product, payment amount, date of issue, date of expiry, etc., which is thereafter wrapped as a DRM package. Among various advantages, one is that only the intended user and the specified application software can open the DRM package using a special key. The application thereafter takes care of the rights imposed on the 'token', which expires after a single use. We have attempted to use UML to design the model of such a system, in line with current software engineering practice. Title Managing virtual money for satisfaction and scale up in P2P systems Abstract In peer-to-peer data management systems, query allocation is a critical issue for the good operation of the system. This task is challenging because participants may prefer to perform some queries over others. Microeconomic mechanisms aim at dealing with this, but, to the best of our knowledge, none of them has ever proposed experimental validations that, beyond query load or response time, use measures that are outside the microeconomic scope. The contribution of this paper is twofold. We present a virtual money-based query allocation process that is suitable for large-scale super-peer systems. We compare a non-microeconomic mediation with microeconomic ones from a satisfaction point of view. The experimental results show that the providers' invoice phase is as important as the providers' selection phase for a virtual money-based mediation. Title First price sealed bid auction without auctioneers Abstract We propose two protocol variants for a first price sealed-bid auction without using intermediary auctioneers. One version achieves full privacy for the bidders and their bids; the other provides a form of verifiability at the cost of some privacy. Full privacy protects all bids. In particular, the winner's identity and price are only known by the seller.
Lesser privacy allows the winner to be known and verified publicly. Both versions provide non-repudiation. We demonstrate correctness and show how computational and communication costs vary with different privacy levels. Title Practical secrecy-preserving, verifiably correct and trustworthy auctions Abstract Title Emergence of service-added model in B2C for small-sized companies Abstract Title Richard Field on Technology and Commerce Abstract Title Secure distributed human computation Abstract CCS Applied computing Electronic commerce Electronic data interchange Title e-procurement for increasing business process agility Abstract Business today is changing rapidly. Every enterprise needs to develop new service offerings, and new technologies have to be adopted or reconfigured. Most service companies are tied to traditional project techniques, which include a staged approach. These stages need to be compressed and changed to meet time-to-market demands. Today every enterprise must be agile enough to respond to the changing requirements of its customers. Agility has become a key attribute today as businesses face uncertain and volatile environments. E-procurement makes it possible to automate buying and selling over the internet. Typically an e-procurement-enabled website will have product comparisons across vendors and various processes like tendering, auctioning, vendor management, and catalogue and contract management. High-end e-procurement solutions allow organizations to define their own processes in the form of workflows - thus utilizing concepts of business process modeling. In this paper we present the findings from a recent survey on e-procurement in India and explain how e-procurement can be used in such fast-growing organizations to speed up business activity at a suitable agility level, as well as its impact on centralization and firms' efficiency in the procurement process. Title A novel simple secure internet voting protocol Abstract A large number of Internet voting systems have been implemented over the years for voluntary and mandatory purposes, with mixed results. In 2002, Wu and Sankaranarayana [] proposed a simple protocol for Internet voting. However, their protocol does not satisfy all properties of an ideal Internet voting protocol. In this paper, we propose an improved Internet voting protocol with properties such as anonymity, third-party verification and avoidance of double voting. To make it more secure and computationally faster, the proposed protocol has been developed using an elliptic curve cryptosystem. Title The influence of the buyer-seller relationship on e-commerce pressures: the case of the primary metal industry Abstract Title Adding semantics to rosettaNet specifications Abstract Title Security, anonymity and trust in electronic auctions Abstract Title E-services: a look behind the curtain Abstract Title A practical approach to solve Secure Multi-party Computation problems Abstract Title Interoperable strategies in automated trust negotiation Abstract Title A Chinese wall security model for decentralized workflow systems Abstract Title Managing trust in a peer-2-peer information system Abstract CCS Applied computing Electronic commerce Electronic funds transfer Title The role of banks in the mobile payment ecosystem: a strategic asset perspective Abstract Markets in developed countries have witnessed the launch of a number of mobile payment initiatives over the last years.
Even though the emergence of mobile payments may still hold high promises, most of these initiatives have seen stagnation or failure. Traditionally, dominant firms from various industries had to negotiate the exchange of their complementary resources and capabilities in order to provide a mobile payment platform. Indeed, significant efforts have been made to design a satisfying business model to enhance this essential collaboration. However, the struggle for these inter-dependent firms to form coalitions just hindered the emergence of successful mobile payment platforms. As firms have difficulties to self-orchestrate their efforts to shape sustainable ecosystems, different industry architectures solving the inter-dependency issue remain to be investigated. In certain architectures, the importance of the banks' role has been questioned. This paper takes a resource-based view on banks to explore how resources/capabilities bestow upon banks a competitive advantage in the mobile payment ecosystem. The analysis leads to the identification of strategic assets, owing to which it can be argued that the banks still have an essential role to play in the industry architecture. Title Designing digital payment artifacts Abstract Ubiquitous and pervasive computing is fundamentally transforming product categories such as music, movies, and books and the associated practices of product searching, ordering, and buying. This paper contributes to theory and practice of digital payments by conducting a design science inquiry into the mobile phone wallet (m-wallet). Four different user groups, including young teenagers, young adults, mothers and businessmen, have been involved in the process of identifying, developing and evaluating functional and design properties of m-wallets. Interviews and formative usability evaluations provided data for the construction of a conceptual model in the form of sketches followed by a functional model in the form of low-fidelity mock-ups. During the design phases, knowledge was gained on what properties the users would like the m-wallet to embody. The identified properties have been clustered as 'Functional properties' and 'Design properties', which are theoretical contributions to the on-going research on m-wallets. One of the findings from our design science inquiry into m-wallets is that everyday life contexts require that evaluation criteria have to be expanded beyond "functionality, completeness, consistency, accuracy, performance, reliability, usability, fit with the organization, and other relevant quality attributes" [12] that are used within current design science work. Title On the convergence and robustness of reserve pricing in keyword auctions Abstract Reserve price becomes a critical issue in mechanism design of keyword auctions mostly because of the potential revenue increase brought up by it. In this paper, we focus on a sub-problem in reserve pricing, that is, how to estimate the bids distribution from the truncated samples and further calculate the optimal reserve price in an iterative setting. To the best of our knowledge, this is the first paper to discuss this problem. We propose to use maximum likelihood estimate (MLE) to solve the problem, and we prove that it is an unbiased method for distribution estimation. Moreover, we further simulate the iterative optimal reserve price calculating and updating process based on the estimated distribution. 
The experimental results are interpreted in terms of the robustness of MLE to truncated sample size and initial reserve price (truncated value), and the convergence of subsequent optimal reserve price in the iterative updating process is also discussed. We conclude that MLE is reliable enough to be applied in real-world optimal reserve pricing in keyword auctions. Title Competition and collaboration shaping the digital payment infrastructure Abstract Digital artifacts take increasingly prominent positions in the life of individuals, organizations and the society at large. This paper inquires into the effects of digitalization on the payment industry. In the case of payments, the ecosystem surrounding a payment historically involved two parties exchanging goods and services for money (banknotes and coins). Today, payment increasingly consists of digital representations of money in a globally intertwined system that involve many parties, such as payers, payment services providers, banks, telecom operators, mobile phone manufactures, and payees. We study how technological payment innovations influence the payment ecosystem, and find that digitalization has caused ecosystem turbulence by influencing competitive and collaborative dimensions of the ecosystem. The digitalization creates a new arena for competition that will require new collaboration forms among involved stockholders. In the extension, we find that future developments of the digital payment infrastructure are something very different from a traditional IT systems development project, which makes existing methods and approaches to systems development inadequate in addressing the challenge. Title The mobile phone as a link to formal financial services: findings from Uganda Abstract Mobile Banking has been touted as revolutionary in the developing world with its capacity to extend financial services access to the unbanked. However, the scope of the financial services offered on the mobile backbone has been at best optimistic and under-developed, spanning typically microtransfer, micropayment and remittance services. As the literature continues to exhort the benefits of long-term, reliable and easy access to formal financial services (especially savings and loan instruments) in combating poverty, the Mobile Banking landscape finds itself in a state of flux. Innovative ventures are being tested around the globe to develop Mobile Banking services to include savings accrual, loan approval and insurance facilities; however whether or not the infrastructure is able to accommodate more inclusive financial services and products is certainly the question of the hour. This paper will present the findings from a three month pilot that was conducted in Uganda to test a Mobile Banking solution that targeted the dissemination of formal financial services, especially savings facilities, to unserved populations by re-appropriating an existing technological platform (mobile phones) and leveraging a non-traditional service provider (bank on wheels). To this end, a preliminary prototype was launched at eight different sites to test its viability. The inception design was constantly monitored and subsequently redesigned. In this manner, the design activity becomes the pivot of the study. The end goal of the pilot was to present a final transformational design to the project partners for consideration for a full-scale, commercial launch. 
Title Privacy-preserving smart metering Abstract Smart grid proposals threaten user privacy by potentially disclosing fine-grained consumption data to utility providers, primarily for time-of-use billing, but also for profiling, settlement, forecasting, tariff and energy efficiency advice. We propose a privacy-preserving protocol for general calculations on fine-grained meter readings, while keeping the use of tamper-evident meters to a strict minimum. We allow users to perform and prove the correctness of computations based on readings on their own devices, without disclosing any fine-grained consumption data. Applying the protocols to time-of-use billing is particularly simple and efficient, but we also support a wider variety of tariff policies. Cryptographic proofs and multiple implementations are used to show that the proposed protocols are secure and efficient. Title Smartcard-based micro-billing scheme to activate the market for user-generated content Abstract User-generated content (UGC) is one of the most promising near-term services. Most UGC is currently distributed for free due to the lack of a suitable compensation system. This inability to compensate the creators stands in the way of UGC becoming a mature service with sustainable and sound growth. The current schemes used to charge for (professional) digital content are clearly inadequate for UGC, since UGC is significantly different from professional content, and users will hesitate to pay for the overhead costs imposed by professional content distribution channels. This may force the UGC market into becoming a Title BulaPay: a web-service based third-party payment system for e-commerce in South Pacific Islands Abstract Third-party payment systems have the potential to provide high-volume, low-cost goods and services for a wide variety of web-based applications. We propose a new model, BulaPay, a third-party payment protocol characterized by off-line processing and suitable for charging for goods and services. Third-party payment systems must provide a secure, highly efficient, flexible, usable and reliable environment; these are the key issues in third-party payment system development. Therefore, in order to assist in the design of a third-party payment model suitable for web-based applications in the Pacific region, we compare and contrast two popular third-party payment models in this paper and outline a new scheme, BulaPay, that we are developing to address the disadvantages of current schemes. Title Micropayment schemes with ability to return changes Abstract Many secure micropayment schemes have been proposed out of the desire to support the low-value, high-volume purchases of some e-commerce applications such as mobile commerce services or web-based interactive video services. However, it seems that no one has studied how to add the ability to return change in micropayment schemes. In this paper, we take the lead in studying micropayment schemes with the ability to return change (MSRC), which reduce the hash operations in the transaction phase. Compared with previous micropayment schemes, the proposed MSRC has low computation costs and is thus more suitable and practical for mobile commerce environments, which have limited computation capability and limited bandwidth. Title Portal-netpay micro-payment system for non-micro-payment vendors Abstract Micro-payment systems have the potential to provide non-intrusive, high-volume and low-cost pay-as-you-use services for a wide variety of web-based applications.
NetPay is one such micro-payment protocol. There is however not currently a way for some vendors who only want to use NetPay facilities temporarily. We propose an extension, Portal-NetPay micro-payment system where a portal or vendor acts as a purchasing portal to non-NetPay supporting vendors by redirecting page accesses to these vendors and charging the customers e-coins in the process. We describe the motivation for Portal-NetPay as well as four transactions of the Portal-NetPay protocol in detail to illustrate the approach. We then discuss future research on this protocol. CCS Applied computing Electronic commerce Online shopping CCS Applied computing Electronic commerce Online banking Title The role of banks in the mobile payment ecosystem: a strategic asset perspective Abstract Markets in developed countries have witnessed the launch of a number of mobile payment initiatives over the last years. Even though the emergence of mobile payments may still hold high promises, most of these initiatives have seen stagnation or failure. Traditionally, dominant firms from various industries had to negotiate the exchange of their complementary resources and capabilities in order to provide a mobile payment platform. Indeed, significant efforts have been made to design a satisfying business model to enhance this essential collaboration. However, the struggle for these inter-dependent firms to form coalitions just hindered the emergence of successful mobile payment platforms. As firms have difficulties to self-orchestrate their efforts to shape sustainable ecosystems, different industry architectures solving the inter-dependency issue remain to be investigated. In certain architectures, the importance of the banks' role has been questioned. This paper takes a resource-based view on banks to explore how resources/capabilities bestow upon banks a competitive advantage in the mobile payment ecosystem. The analysis leads to the identification of strategic assets, owing to which it can be argued that the banks still have an essential role to play in the industry architecture. Title Designing digital payment artifacts Abstract Ubiquitous and pervasive computing is fundamentally transforming product categories such as music, movies, and books and the associated practices of product searching, ordering, and buying. This paper contributes to theory and practice of digital payments by conducting a design science inquiry into the mobile phone wallet (m-wallet). Four different user groups, including young teenagers, young adults, mothers and businessmen, have been involved in the process of identifying, developing and evaluating functional and design properties of m-wallets. Interviews and formative usability evaluations provided data for the construction of a conceptual model in the form of sketches followed by a functional model in the form of low-fidelity mock-ups. During the design phases, knowledge was gained on what properties the users would like the m-wallet to embody. The identified properties have been clustered as 'Functional properties' and 'Design properties', which are theoretical contributions to the on-going research on m-wallets. One of the findings from our design science inquiry into m-wallets is that everyday life contexts require that evaluation criteria have to be expanded beyond "functionality, completeness, consistency, accuracy, performance, reliability, usability, fit with the organization, and other relevant quality attributes" [12] that are used within current design science work. 
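The NetPay-style micro-payment schemes described above (NetPay, Portal-NetPay, and the hash-based MSRC scheme) rely on e-coins that are cheap to verify, and such coins are conventionally built from one-way hash chains in the PayWord tradition: the customer commits to the end of a chain and spends by revealing successive pre-images, so each payment costs the vendor a single hash. The sketch below illustrates only that generic hash-chain idea; it is not the actual NetPay, Portal-NetPay or MSRC message flow, and all names (mint_chain, Vendor.accept) are ours.

```python
# A generic PayWord-style hash-chain e-coin: the customer commits to the chain
# root and spends by revealing successive pre-images; the vendor verifies each
# payment with one hash.  This is an illustration of the general technique, not
# the exact NetPay / Portal-NetPay protocol.
import hashlib
import os


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def mint_chain(n: int):
    """Customer side: build a chain of n coins; the last element is the public root."""
    seed = os.urandom(32)
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    # chain[n] is the root; chain[n-1], chain[n-2], ... are spent in order
    return chain


class Vendor:
    def __init__(self, root: bytes):
        self.last = root            # most recently verified chain value
        self.coins_received = 0

    def accept(self, payment: bytes) -> bool:
        """Verify that the payment hashes to the last verified value (one coin)."""
        if h(payment) == self.last:
            self.last = payment
            self.coins_received += 1
            return True
        return False


if __name__ == "__main__":
    chain = mint_chain(n=5)
    vendor = Vendor(root=chain[-1])             # root assumed certified by a broker
    for i in range(len(chain) - 2, -1, -1):     # reveal pre-images one by one
        assert vendor.accept(chain[i])
    print("coins redeemed:", vendor.coins_received)
```

In PayWord-like designs, a broker certifies the chain root and later redeems the deepest revealed pre-image, which is what makes double-spending detectable; that broker step is omitted from the sketch.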
Title On the convergence and robustness of reserve pricing in keyword auctions Abstract Reserve price becomes a critical issue in mechanism design of keyword auctions mostly because of the potential revenue increase brought up by it. In this paper, we focus on a sub-problem in reserve pricing, that is, how to estimate the bids distribution from the truncated samples and further calculate the optimal reserve price in an iterative setting. To the best of our knowledge, this is the first paper to discuss this problem. We propose to use maximum likelihood estimate (MLE) to solve the problem, and we prove that it is an unbiased method for distribution estimation. Moreover, we further simulate the iterative optimal reserve price calculating and updating process based on the estimated distribution. The experimental results are interpreted in terms of the robustness of MLE to truncated sample size and initial reserve price (truncated value), and the convergence of subsequent optimal reserve price in the iterative updating process is also discussed. We conclude that MLE is reliable enough to be applied in real-world optimal reserve pricing in keyword auctions. Title Competition and collaboration shaping the digital payment infrastructure Abstract Digital artifacts take increasingly prominent positions in the life of individuals, organizations and the society at large. This paper inquires into the effects of digitalization on the payment industry. In the case of payments, the ecosystem surrounding a payment historically involved two parties exchanging goods and services for money (banknotes and coins). Today, payment increasingly consists of digital representations of money in a globally intertwined system that involve many parties, such as payers, payment services providers, banks, telecom operators, mobile phone manufactures, and payees. We study how technological payment innovations influence the payment ecosystem, and find that digitalization has caused ecosystem turbulence by influencing competitive and collaborative dimensions of the ecosystem. The digitalization creates a new arena for competition that will require new collaboration forms among involved stockholders. In the extension, we find that future developments of the digital payment infrastructure are something very different from a traditional IT systems development project, which makes existing methods and approaches to systems development inadequate in addressing the challenge. Title The mobile phone as a link to formal financial services: findings from Uganda Abstract Mobile Banking has been touted as revolutionary in the developing world with its capacity to extend financial services access to the unbanked. However, the scope of the financial services offered on the mobile backbone has been at best optimistic and under-developed, spanning typically microtransfer, micropayment and remittance services. As the literature continues to exhort the benefits of long-term, reliable and easy access to formal financial services (especially savings and loan instruments) in combating poverty, the Mobile Banking landscape finds itself in a state of flux. Innovative ventures are being tested around the globe to develop Mobile Banking services to include savings accrual, loan approval and insurance facilities; however whether or not the infrastructure is able to accommodate more inclusive financial services and products is certainly the question of the hour. 
This paper will present the findings from a three month pilot that was conducted in Uganda to test a Mobile Banking solution that targeted the dissemination of formal financial services, especially savings facilities, to unserved populations by re-appropriating an existing technological platform (mobile phones) and leveraging a non-traditional service provider (bank on wheels). To this end, a preliminary prototype was launched at eight different sites to test its viability. The inception design was constantly monitored and subsequently redesigned. In this manner, the design activity becomes the pivot of the study. The end goal of the pilot was to present a final transformational design to the project partners for consideration for a full-scale, commercial launch. Title Privacy-preserving smart metering Abstract Smart grid proposals threaten user privacy by potentially disclosing fine-grained consumption data to utility providers, primarily for time-of-use billing, but also for profiling, settlement, forecasting, tariff and energy efficiency advice. We propose a privacy-preserving protocol for general calculations on fine-grained meter readings, while keeping the use of tamper evident meters to a strict minimum. We allow users to perform and prove the correctness of computations based on readings on their own devices, without disclosing any fine grained consumption. Applying the protocols to time-of-use billing is particularly simple and efficient, but we also support a wider variety of tariff policies. Cryptographic proofs and multiple implementations are used to show the proposed protocols are secure and efficient. Title Smartcard-based micro-billing scheme to activate the market for user-generated content Abstract User-generated content (UGC) is one of the most promising near-term services. Most UGC are currently being distributed for free due to the lack of a suitable compensation system. This inability to compensate the creators stands in the way of UGC becoming a mature service with sustainable and sound growth. The current schemes used to charge for (professional) digital content are clearly inadequate for UGC, since UGC is significantly different from professional content, and users will hesitate to pay for the overhead costs imposed by professional content distribution channels. This may force the UGC market into becoming a Title BulaPay: a web-service based third-party payment system for e-commerce in South Pacific Islands Abstract Third-party payment systems have potential to provide high-volume and low-cost goods, and services for a wide variety of web-based applications. We propose a new model, BulaPay, a third party payment protocol characterized by off-line processing, suitable for charging goods and services. Third-party payment systems must provide a secure, highly efficient, flexible, usable and reliable environment, the key issues in third-party payment systems development. Therefore, in order to assist in the design of a third party payment model suitable for web-based application in the pacific region, we compare and contrast two popular third-party payment models in this paper and outline a new scheme - BulaPay we are developing that addresses the disadvantages in current schemes. Title Micropayment schemes with ability to return changes Abstract Many secure micropayment schemes have been proposed as the desire to support the low-value and the high-volume purchases of some e-commerce applications such as mobile commerce services or web-based interactive video services. 
However, it seems that no prior work studies how to add the ability to return change to micropayment schemes. In this paper, we take the lead in studying micropayment schemes with the ability to return change (MSRC), which reduce the hash operations in the transaction phase. Compared with previous micropayment schemes, the proposed MSRC has low computation costs and is thus more suitable and practical for mobile commerce environments, which have limited computation capability and bandwidth. Title Portal-netpay micro-payment system for non-micro-payment vendors Abstract Micro-payment systems have the potential to provide non-intrusive, high-volume and low-cost pay-as-you-use services for a wide variety of web-based applications. NetPay is one such micro-payment protocol. There is, however, currently no way for vendors who only want to use NetPay facilities temporarily to do so. We propose an extension, the Portal-NetPay micro-payment system, in which a portal or vendor acts as a purchasing portal to vendors that do not support NetPay, redirecting page accesses to these vendors and charging the customers' e-coins in the process. We describe the motivation for Portal-NetPay as well as four transactions of the Portal-NetPay protocol in detail to illustrate the approach. We then discuss future research on this protocol. CCS Applied computing Electronic commerce Secure online transactions Title The role of banks in the mobile payment ecosystem: a strategic asset perspective Abstract Markets in developed countries have witnessed the launch of a number of mobile payment initiatives over recent years. Even though the emergence of mobile payments may still hold high promise, most of these initiatives have seen stagnation or failure. Traditionally, dominant firms from various industries had to negotiate the exchange of their complementary resources and capabilities in order to provide a mobile payment platform. Indeed, significant efforts have been made to design a satisfying business model to enhance this essential collaboration. However, the struggle of these inter-dependent firms to form coalitions has hindered the emergence of successful mobile payment platforms. As firms have difficulty self-orchestrating their efforts to shape sustainable ecosystems, different industry architectures solving the inter-dependency issue remain to be investigated. In certain architectures, the importance of the banks' role has been questioned. This paper takes a resource-based view of banks to explore how resources and capabilities bestow upon banks a competitive advantage in the mobile payment ecosystem. The analysis leads to the identification of strategic assets, owing to which it can be argued that the banks still have an essential role to play in the industry architecture. Title Designing digital payment artifacts Abstract Ubiquitous and pervasive computing is fundamentally transforming product categories such as music, movies, and books and the associated practices of product searching, ordering, and buying. This paper contributes to the theory and practice of digital payments by conducting a design science inquiry into the mobile phone wallet (m-wallet). Four different user groups, including young teenagers, young adults, mothers and businessmen, have been involved in the process of identifying, developing and evaluating functional and design properties of m-wallets.
Interviews and formative usability evaluations provided data for the construction of a conceptual model in the form of sketches followed by a functional model in the form of low-fidelity mock-ups. During the design phases, knowledge was gained on what properties the users would like the m-wallet to embody. The identified properties have been clustered as 'Functional properties' and 'Design properties', which are theoretical contributions to the on-going research on m-wallets. One of the findings from our design science inquiry into m-wallets is that everyday life contexts require that evaluation criteria have to be expanded beyond "functionality, completeness, consistency, accuracy, performance, reliability, usability, fit with the organization, and other relevant quality attributes" [12] that are used within current design science work. Title On the convergence and robustness of reserve pricing in keyword auctions Abstract Reserve price becomes a critical issue in mechanism design of keyword auctions mostly because of the potential revenue increase brought up by it. In this paper, we focus on a sub-problem in reserve pricing, that is, how to estimate the bids distribution from the truncated samples and further calculate the optimal reserve price in an iterative setting. To the best of our knowledge, this is the first paper to discuss this problem. We propose to use maximum likelihood estimate (MLE) to solve the problem, and we prove that it is an unbiased method for distribution estimation. Moreover, we further simulate the iterative optimal reserve price calculating and updating process based on the estimated distribution. The experimental results are interpreted in terms of the robustness of MLE to truncated sample size and initial reserve price (truncated value), and the convergence of subsequent optimal reserve price in the iterative updating process is also discussed. We conclude that MLE is reliable enough to be applied in real-world optimal reserve pricing in keyword auctions. Title Competition and collaboration shaping the digital payment infrastructure Abstract Digital artifacts take increasingly prominent positions in the life of individuals, organizations and the society at large. This paper inquires into the effects of digitalization on the payment industry. In the case of payments, the ecosystem surrounding a payment historically involved two parties exchanging goods and services for money (banknotes and coins). Today, payment increasingly consists of digital representations of money in a globally intertwined system that involve many parties, such as payers, payment services providers, banks, telecom operators, mobile phone manufactures, and payees. We study how technological payment innovations influence the payment ecosystem, and find that digitalization has caused ecosystem turbulence by influencing competitive and collaborative dimensions of the ecosystem. The digitalization creates a new arena for competition that will require new collaboration forms among involved stockholders. In the extension, we find that future developments of the digital payment infrastructure are something very different from a traditional IT systems development project, which makes existing methods and approaches to systems development inadequate in addressing the challenge. 
Title DRAP: a Robust Authentication protocol to ensure survivability of computational RFID networks Abstract The Wireless Identification and Sensing Platform (WISP) from Intel Research Seattle is an instance of Computational RFID (CRFID). Since WISP tags contain sensor data along with their Title The mobile phone as a link to formal financial services: findings from Uganda Abstract Mobile Banking has been touted as revolutionary in the developing world with its capacity to extend financial services access to the unbanked. However, the scope of the financial services offered on the mobile backbone has been at best optimistic and under-developed, typically spanning microtransfer, micropayment and remittance services. As the literature continues to exhort the benefits of long-term, reliable and easy access to formal financial services (especially savings and loan instruments) in combating poverty, the Mobile Banking landscape finds itself in a state of flux. Innovative ventures are being tested around the globe to develop Mobile Banking services to include savings accrual, loan approval and insurance facilities; however, whether or not the infrastructure is able to accommodate more inclusive financial services and products is certainly the question of the hour. This paper will present the findings from a three-month pilot that was conducted in Uganda to test a Mobile Banking solution that targeted the dissemination of formal financial services, especially savings facilities, to unserved populations by re-appropriating an existing technological platform (mobile phones) and leveraging a non-traditional service provider (bank on wheels). To this end, a preliminary prototype was launched at eight different sites to test its viability. The inception design was constantly monitored and subsequently redesigned. In this manner, the design activity becomes the pivot of the study. The end goal of the pilot was to present a final transformational design to the project partners for consideration for a full-scale, commercial launch. Title Untraceable, anonymous and fair micropayment scheme Abstract The development of new applications of electronic commerce (e-commerce) that require the payment of small amounts of money to purchase services or goods opens new challenges in the security and privacy fields. Such payments are called micropayments, and they have to provide a tradeoff between efficiency and security requirements when paying for low-value items. In this paper we present a new efficient and secure micropayment scheme which fulfils the security properties that guarantee no financial risk for merchants and the privacy of the customers. In addition, the proposed system defines a fair exchange between the micropayment and the desired good or service. In this fair exchange, the anonymity and untraceability of the customers are assured. Finally, customers can request a refund if they are no longer interested in the services offered by merchants. Title On the security and practicality of a buyer seller watermarking protocol for DRM Abstract A buyer seller watermarking (BSW) protocol allows a seller of digital content to prove to a third party that a buyer illegally distributed copies of content when these copies are found. It also protects an honest buyer from being falsely accused of such an act by the seller. We examine the security and practicality of a recent BSW protocol for Digital Rights Management (BSW-DRM) proposed in SIN 2009.
We show that the protocol contains weaknesses, which may result in successful replay, modification and content piracy. Furthermore, the heavy reliance on the fully trusted Certificate Authority raises security concerns, and the protocol is also less practical to apply in current digital content distribution systems. We further suggest possible improvements based on the many protocols proposed prior to this protocol. Title Echo hiding based stereo audio watermarking against pitch-scaling attacks Abstract In audio watermarking, robustness against pitch-scaling attacks is one of the most challenging problems. In this paper, we propose an algorithm based on traditional time-spread (TS) echo hiding audio watermarking to solve this problem. In TS echo hiding based watermarking, a pitch-scaling attack shifts the location of the pseudonoise (PN) sequence that appears in the cepstrum domain. Thus, the position of the peak that occurs after correlating with the PN sequence changes by an unknown amount, which causes the error. In the proposed scheme, we replace the PN sequence with a unit-sample sequence and modify the decoding algorithm in such a way that it does not depend on a particular point in the cepstrum domain for watermark extraction. Moreover, the proposed algorithm is applied to stereo audio signals to further improve the robustness. Experimental results illustrate the effectiveness of the proposed algorithm against pitch-scaling attacks compared to existing methods. In addition, the proposed algorithm also gives better robustness against other conventional signal processing attacks. Title Understanding fraudulent activities in online ad exchanges Abstract Online advertisements (ads) provide a powerful mechanism for advertisers to effectively target Web users. Ads can be customized based on a user's browsing behavior, geographic location, and personal interests. There is currently a multi-billion dollar market for online advertising, which generates the primary revenue for some of the most popular websites on the Internet. In order to meet the immense market demand, and to manage the complex relationships between advertisers and publishers (i.e., the websites hosting the ads), marketplaces known as "ad exchanges" are employed. These exchanges allow publishers (sellers of ad space) and advertisers (buyers of this ad space) to dynamically broker traffic through ad networks to efficiently maximize profits for all parties. Unfortunately, the complexities of these systems invite a considerable amount of abuse from cybercriminals, who profit at the expense of the advertisers. In this paper, we present a detailed view of how one of the largest ad exchanges operates and the associated security issues from the vantage point of a member ad network. More specifically, we analyzed a dataset containing transactions for ingress and egress ad traffic from this ad network. In addition, we examined information collected from a command-and-control server used to operate a botnet that is leveraged to perpetrate ad fraud against the same ad exchange.
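The echo-hiding abstract above embeds the watermark as a faint, delayed copy of the host signal and reads it back from the cepstrum. The toy sketch below shows only the basic single-echo, single-channel version of that idea, with one of two delays encoding one bit and detection done by comparing the cepstrum at the two candidate delays; the delays, echo amplitude and all function names are ours, and the paper's time-spread, stereo, pitch-scale-robust scheme is substantially more elaborate.

```python
# Minimal single-echo illustration of echo hiding: a bit is embedded by adding a
# faint delayed copy of the signal at one of two delays, and recovered from the
# larger of the two cepstrum values at those delays.  This is a toy version of
# the idea only; delays and amplitude are illustrative choices of ours.
import numpy as np

DELAY0, DELAY1, ALPHA = 100, 150, 0.3    # delays in samples, echo amplitude


def embed_bit(x: np.ndarray, bit: int) -> np.ndarray:
    d = DELAY1 if bit else DELAY0
    y = x.copy()
    y[d:] += ALPHA * x[:-d]               # add the faint echo
    return y


def real_cepstrum(y: np.ndarray) -> np.ndarray:
    spectrum = np.fft.fft(y)
    return np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)).real


def extract_bit(y: np.ndarray) -> int:
    c = real_cepstrum(y)
    return int(c[DELAY1] > c[DELAY0])      # the embedded delay shows up as a cepstral peak


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    host = rng.standard_normal(8192)       # stand-in for a short audio frame
    for bit in (0, 1):
        marked = embed_bit(host, bit)
        print(bit, "->", extract_bit(marked))
```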
CCS Applied computing Electronic commerce Online auctions CCS Applied computing Enterprise computing Enterprise information systems CCS Applied computing Enterprise computing Business process management CCS Applied computing Enterprise computing Enterprise architectures CCS Applied computing Enterprise computing Service-oriented architectures CCS Applied computing Enterprise computing Event-driven architectures CCS Applied computing Enterprise computing Business rules CCS Applied computing Enterprise computing Enterprise modeling CCS Applied computing Enterprise computing Enterprise ontologies, taxonomies and vocabularies CCS Applied computing Enterprise computing Enterprise data management CCS Applied computing Enterprise computing Reference models CCS Applied computing Enterprise computing Business-IT alignment CCS Applied computing Enterprise computing IT architectures CCS Applied computing Enterprise computing IT governance CCS Applied computing Enterprise computing Enterprise computing infrastructures CCS Applied computing Enterprise computing Enterprise interoperability CCS Applied computing Physical sciences and engineering Aerospace CCS Applied computing Physical sciences and engineering Archaeology Title Spherical photogrammetry for cultural heritage—San Galgano Abbey and the Roman Theater, Sabratha Abstract In this article, we present the results of the photogrammetric surveys of two important monuments, the Roman Theatre in Sabratha, Libya, and San Galgano Abbey, in Italy. The surveys were performed with a new photogrammetric technique, Spherical Photogrammetry, developed by Gabriele Fangi [2007, 2008, 2009, 2010]. The method is based on so-called spherical panoramas. These are obtained by stitching together several pictures taken from the same point and covering 360°, which are then mapped in a plane with an equi-rectangular projection. This technique is normally used to produce QuickTime movies which have already proven to be very useful for the documentation of cultural heritage. One panorama can replace many normal photographic images. Ease, rapidity, low cost, and completeness of the documentation are the main advantages of this technique. The Abbey of San Galgano is an important example of Gothic architecture in Italy. The church is empty and without its roof, which fell towards the end of the 18 Title 3D visualization of archaeological uncertainty Abstract By uncertainty, we define an archaeological expert's level of confidence in an interpretation deriving from gathered evidence. Archaeologists and computer scientists have urged caution in the use of 3D for archaeological reconstructions because the availability of other possible hypotheses is not always being acknowledged. This poster presents a 3D visualization system of archaeological uncertainty. 
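The spherical-photogrammetry abstract above rests on mapping a stitched spherical panorama to a plane with an equirectangular projection, in which longitude and latitude map linearly to pixel columns and rows. A minimal version of that mapping, with conventions of our own choosing (the panorama spans 360 degrees horizontally and 180 degrees vertically, with latitude +90 at the top row), is sketched below.

```python
# Equirectangular mapping used for spherical panoramas: the longitude/latitude of
# a viewing direction map linearly to pixel coordinates.  Conventions here are
# illustrative assumptions, not taken from the paper.


def direction_to_pixel(lon_deg: float, lat_deg: float, width: int, height: int):
    """Map a direction (longitude in [-180, 180), latitude in [-90, 90]) to (col, row)."""
    col = (lon_deg + 180.0) / 360.0 * width
    row = (90.0 - lat_deg) / 180.0 * height     # latitude +90 maps to the top row
    return col, row


def pixel_to_direction(col: float, row: float, width: int, height: int):
    """Inverse mapping from a panorama pixel back to a viewing direction."""
    lon = col / width * 360.0 - 180.0
    lat = 90.0 - row / height * 180.0
    return lon, lat


if __name__ == "__main__":
    w, h = 4096, 2048
    print(direction_to_pixel(0.0, 0.0, w, h))     # image centre
    print(pixel_to_direction(0.0, 0.0, w, h))     # top-left corner
```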
Title Cultural heritage and digital experience design: presentation, adaptation and competitive evolution Abstract Title Presenting a monument in restoration: the Saint Laurentius church in Ename and its role in the Francia Media heritage initiative Abstract Title Modeling and visualizing the cultural heritage data set of Graz Abstract Title Web based 3D VRML record of a historic collection Abstract Title Managing and organizing archaeological data sets with an XML native database Abstract Title Meeting the spirit of history Abstract Title Copyright protection and management and a web based library for digital images of the Hellenic cultural heritage Abstract Title Wag the dog?: archaeology, reality and virtual reality in a virtual country Abstract CCS Applied computing Physical sciences and engineering Astronomy Title Fast algorithms for comprehensive n-point correlation estimates Abstract The We present the first comprehensive approach to the entire Title DOME: towards the ASTRON & IBM center for exascale technology Abstract The computational and storage demands for the future Square Kilometer Array (SKA) radio telescope are significant. Building on the experience gained with the collaboration between ASTRON and IBM with the Blue Gene based LOFAR correlator, ASTRON and IBM have now embarked on a public-private exascale computing research project aimed at solving the SKA computing challenges. This project, called DOME, investigates novel approaches to exascale computing, with a focus on energy efficient, streaming data processing, exascale storage, and nano-photonics. DOME will not only benefit the SKA, but will also make the knowledge gained available to interested third parties via a Users Platform. The intention of the DOME project is to evolve into the global center of excellence for transporting, processing, storing and analyzing large amounts of data for minimal energy cost: the Title ExaScale high performance computing in the square kilometer array Abstract Next generation radio telescopes will require tremendous amounts of compute power. With the current state of the art, the Square Kilometer Array (SKA), currently entering its pre-construction phase, will require in excess of one ExaFlop/s in order to process and reduce the massive amount of data generated by the sensors. The nature of the processing involved means that conventional high performance computing (HPC) platforms are not ideally suited. Consequently, the SKA project requires active and intensive involvement from both the high performance computing research community, as well as industry, in order to make sure a suitable system is available when the telescope is built. In this paper we present a first analysis of the processing required, and a tool that will facilitate future analysis and external involvement. Title Parallel gravity: from embarrassingly parallel to hierarchical Abstract In this talk I will describe how we use Graphics Processing Units (GPU) to solve the gravitational force equations as described by Sir Isaac Newton in 1687. The talk will start with the embarrassingly parallel direct N-body methods, which are ideal for parallel architectures, like GPUs, to the more complex methods as used in high precision production quality astrophysical simulations. From the trivial direct N-body methods we continue with the more complex hierarchical N-body methods. We present the implementation of a tree-code that is executed fully on the GPU. 
In this tree-code, not only the gravitational force equations but also less obvious methods, such as the construction of the hierarchical data structure, are executed on the GPU. For each of the presented GPU codes we show results of actual production simulations, for example the merging and interaction of galaxies and the merging of supermassive black holes. Title Exact and approximate computation of a histogram of pairwise distances between astronomical objects Abstract We compare several alternative approaches to computing correlation functions, which is a cosmological application for analyzing the distribution of matter in the universe. This computation involves counting the pairs of galaxies within a given distance from each other and building a histogram that shows the dependency of the number of pairs on the distance. The straightforward algorithm for counting the exact number of pairs has the Title Tessellation analysis of the cosmic web Abstract The large-scale distribution of matter and galaxies features a complex network of interconnected filamentary galaxy associations. This network, which has become known as the The overwhelming complexity of both the individual structures as well as their connectivity, the lack of structural symmetries, the intrinsic multiscale nature and the wide range of densities that one finds in the cosmic matter distribution has prevented the use of simple and straightforward instruments. In this lecture, I describe the considerable advances that have been made over the past decade towards unravelling the structure of the Cosmic Web, enabled by a range of tools and concepts from computational geometry and computational topology. This will include our own work, in which Voronoi and Delaunay tessellations figure prominently through their high sensitivity to the density and local shape of the local galaxy distribution, or particle distribution in the case of computer simulations of cosmic structure formation. It has led to the development of the Delaunay Tessellation Field Estimator (DTFE) formalism, which forms the basis of a range of techniques to identify different aspects of the Cosmic Web [weyschaap2009]. Examples are the Watershed Void Finder to trace voids, the Nexus multiscale morphology formalism and the Morse-based SpineWeb formalism to find walls, filaments and clusters. Recently, we used alpha shapes to study the multiscale topology of the Cosmic Web, in terms of Betti numbers and persistence diagrams. I will also review a number of other astronomical applications of tessellations, motivated by their quickly proliferating use in astrophysics and cosmology. Title The sticky geometry of the cosmic web Abstract In this video we highlight the application of Computational Geometry to our understanding of the formation and dynamics of the Cosmic Web. The emergence of this intricate and pervasive weblike structure of the Universe on Megaparsec scales can be approximated by a well-known equation from fluid mechanics, the Burgers' equation. The solution to this equation can be obtained from a geometrical formalism. We have extended and improved this method by invoking weighted Delaunay and Voronoi tessellations. The duality between these tessellations finds a remarkable and profound reflection in the description of physical systems in Eulerian and Lagrangian terms. The resulting Adhesion formalism provides deep insight into the dynamics and topology of the Cosmic Web.
It uncovers a direct connection between the conditions in the very early Universe and the complex spatial patterns that emerged out of these under the influence of gravity. Title Subset removal on massive data with Dash Abstract Ongoing efforts by the Large Synoptic Survey Telescope (LSST) involve the study of asteroid search algorithms and their performance on both real and simulated data. Images of the night sky reveal large numbers of events caused by the reflection of sunlight from asteroids. Detections from consecutive nights can then be grouped together into tracks that potentially represent small portions of the asteroids' sky-plane motion. The analysis of these tracks is extremely time-consuming and there is strong interest in the development of techniques that can eliminate unnecessary tracks, thereby rendering the problem more manageable. One such approach is to collectively examine sets of tracks and discard those that are subsets of others. Our implementation of a subset removal algorithm has proven to be fast and accurate on modest-sized collections of tracks, but unfortunately has extremely large memory requirements for realistic data sets and cannot effectively use conventional high performance computing resources. We report our experience running the subset removal algorithm on the TeraGrid Appro Dash system, which uses the vSMP software developed by ScaleMP to aggregate memory from across multiple compute nodes to provide access to a large, logical shared memory space. Our results show that Dash is ideally suited for this algorithm and has performance comparable to or superior to that obtained on specialized, heavily demanded, large-memory systems such as the SGI Altix UV. Title Formal first integrals along solutions of differential systems I Abstract We consider an analytic vector field Title Seismicity of the moon: application of the spectral analysis for search of the latent periodicity of a minute range Abstract In the present work, a technique for identifying latent periodicity is proposed. The technique can be applied to the analysis of any type of time series. When analysing the time series recorded by the Apollo stations, the main difficulty is that the data are not equidistant in time. The technique comprises spectral analysis, construction of the SVAN diagram, the resonance diagram, and spectrum-histogram analysis. CCS Applied computing Physical sciences and engineering Chemistry Title Poster: study of protein-ligand binding geometries using a scalable and accurate octree-based algorithm in mapReduce Abstract We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the densest octant. We adapt our method for MapReduce and implement it in Hadoop. Load-balancing, fault-tolerance, and scalability in MapReduce allow the screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking and cross-docking for a series of HIV protease inhibitors.
Our method demonstrates significant improvement over "energy-only" scoring for the accurate identification of native ligand geometries. The advantages of this approach make it attractive for complex applications in real-world drug design efforts. Title Monte Carlo strategies for first-principles simulations of elemental systems Abstract We discuss the application of atomistic Monte Carlo simulation based on electronic structure calculations to elemental systems such as metals and alloys. As in prior work in this area, an approximate "pre-sampling" potential is used to generate large moves with a high probability of acceptance. Even with such a scheme, however, such simulations are extremely expensive and may benefit from algorithmic developments that improve acceptance rates and/or enable additional parallelization. Here we consider several such developments. The first of these is a three-level hybrid algorithm in which two pre-sampling potentials are used. The lowest level is an empirical potential, and the "middle" level uses a low-quality density functional theory. The efficiency of the multistage algorithm is analyzed and an example application is given. Two other schemes for reducing overall run-time are also considered. In the first, the Multiple-try Monte Carlo algorithm, a series of moves are attempted in parallel, with the choice of the next state in the chain made by using all the information gathered. This is found to be a poor choice for simulations of this type. In the second scheme, "tree sampling," multiple trial moves are made in parallel such that if the first is rejected, the second is ready and can be considered immediately. Performance of this scheme is shown to be quite effective under certain reasonable run parameters. Title Transforming molecular biology research through extreme acceleration of AMBER molecular dynamics simulations: sampling for the 99% Abstract This talk will cover recent developments in the acceleration of Molecular Dynamics Simulations using NVIDIA Graphics Processing units with the AMBER software package. In particular it will focus on recent algorithmic improvements aimed at accelerating the rate at which phase space is sampled. A recent success has been the reproduction and extension of key results from the DE Shaw 1 millisecond Anton MD simulation of BPTI (Science, Vol. 330 no. 6002 pp. 341-346) with just 2.5 days of dihedral boosted AMD sampling on a single GPU workstation, (Pierce L, Walker R. C. et al. JCTC, 2012 in review). These results show that with careful algorithm design it is possible to obtain sampling of rare biologically relevant events that occur on the millisecond timescale using just a single $500 GTX580 Graphics Card and a desktop workstation. Additional developments highlighted will include the acceleration of AMBER MD simulations using graphics processing units including Amazon EC2 and Microsoft Azure Cloud based automated ensemble calculations, a new precision model optimized for the upcoming Kepler architecture (Walker R. C. et al, JCP, 2012, in prep) as well as approaches for running large scale multi-dimensional GPU accelerated replica exchange calculations on Keeneland and BlueWaters. Title Extending parallel scalability of LAMMPS and multiscale reactive molecular simulations Abstract Conducting molecular dynamics (MD) simulations involving chemical reactions in large-scale condensed phase systems (liquids, proteins, fuel cells, etc...) 
is a computationally prohibitive task even though many new The typical parallel scaling bottleneck in both reactive and nonreactive all-atom MD simulations is the accurate treatment of long-range electrostatic interactions. Currently, Ewald-type algorithms rely on three-dimensional Fast Fourier Transform (3D-FFT) calculations. The parallel scaling of these 3D-FFT calculations can be severely degraded at higher processor counts due to necessary MPI all-to-all communication. This poses an even bigger problem in MS-EVB calculations, since the electrostatics, and hence the 3D-FFT, must be evaluated many times during a single time step. Due to the limited scaling of the 3D-FFT in MD simulations, the traditional single-program-multiple-data (SPMD) parallelism model is only able to utilize several hundred CPU cores, even for very large systems. However, with a proper implementation of a multi-program (MP) model, large systems can scale to thousands of CPU cores. This paper will discuss recent efforts in collaboration with XSEDE advanced support to implement the MS-EVB model in the scalable LAMMPS MD code, and to further improve parallel scaling by implementing MP parallelization algorithms in LAMMPS. These algorithms improve parallel scaling in both the standard LAMMPS code and LAMMPS with MS-EVB, thus facilitating the efficient simulation of large-scale condensed phase systems, including the ability to model chemical reactions. Title Large scale plane wave pseudopotential density functional theory calculations on GPU clusters Abstract In this work, we present our implementation of density functional theory (DFT) plane-wave pseudopotential (PWP) calculations on GPU clusters. The GPU version is developed from a CPU DFT-PWP code, PEtot, which can calculate up to a thousand atoms on thousands of processors. Our tests indicate that the GPU version achieves a ~10 times speed-up over the CPU version. A detailed analysis of the speed-up and of the scaling with the number of CPU/GPU computing units (up to 256) is presented. The success of our speed-up relies on the adoption of a hybrid reciprocal-space and band-index parallelization scheme. As far as we know, this is the first GPU DFT-PWP code scalable to a large number of CPU/GPU computing units. We also outline future work and what is needed to further increase the computational speed by another factor of 10. Title Affinity limits in B-cell epitope prediction for immunity mediated by antipeptide antibodies Abstract A major goal of B-cell epitope prediction is to support the design of peptide-based immunogens (e.g., vaccines) for eliciting antipeptide antibodies that protect against disease, but these antibodies fail to confer protection and even promote disease if they bind with low affinity. In the present work, the Immune Epitope Database (IEDB) was searched to obtain published thermodynamic and kinetic data on binding interactions of antipeptide antibodies. The data suggest that the affinity of the antibodies for their immunizing peptides appears to be limited in a manner consistent with previously proposed kinetic constraints on affinity maturation in vivo, and that cross-reaction of the antibodies with proteins tends to occur with lower affinity than the corresponding reaction of the antibodies with their immunizing peptides.
These observations serve to better inform B-cell epitope prediction, particularly to avoid overestimation of affinity for both active and passive immunization; whereas active immunization is subject to limitations of affinity maturation in vivo and of the capacity to accumulate endogenous antibodies, passive immunization may transcend such limitations, possibly with the aid of artificial affinity-selection processes and of protein engineering. In addition to affinity, intrinsic protein disorder may be a useful supplementary criterion for B-cell epitope prediction where such disorder obviates thermodynamically unfavorable protein structural adjustments in the course of cross-reaction between antipeptide antibodies and proteins. Title Automated peak alignment for nucleic acid capillary electrophoresis data by dynamic programming Abstract Automated capillary electrophoresis represents a powerful approach for high-throughput analysis of nucleic acid chemical probing experiments. Correcting time variation and measuring similarity of time-series data are major challenges in the automated analysis, however. Here we describe an automated peak alignment algorithm that incorporates a dynamic programming approach to align multiple-peak time series profiles. A new peak similarity function and other algorithmic features make possible rapid peak alignment and highly accurate comparisons of complex time-series datasets. Title Challenges and opportunities in renewable energy and energy efficiency Abstract The National Renewable Energy Laboratory (NREL) in Golden, Colorado is the nation's premier laboratory for renewable energy and energy efficiency research. In this talk we will give a brief overview of NREL and then focus on some of the challenges and opportunities in meeting future global energy challenges. Computational modeling, high performance computing, data management and visual informatics are playing a key role in advancing our fundamental understanding of processes and systems at temporal and spatial scales that evade direct observation and in helping meet U.S. goals for energy efficiency and clean energy production. This discussion will include details of new, highly energy-efficient buildings and social behaviors impacting energy use, fundamental understanding of plants and proteins leading to lower-cost renewable fuels, novel computational chemistry approaches for low-cost photovoltaic materials, and computational fluid dynamics challenges in simulating complex behaviors within and between large-scale deployments of wind farms and understanding their potential impacts on local and regional climate. Title Submicron model for illuminated gallium nitride HEMT Abstract Microwave power transistors play a key role in today's wireless communication, necessary for virtually all major aspects of human activity from entertainment and business to the military. The HEMT is widely used due to its high speed and power amplification capabilities. The paper compares Shockley's Model and the New Model of the HEMT to evaluate its sensitivity to illumination and to find its application in optical monolithic microwave integrated circuits (OMMIC).
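The capillary-electrophoresis abstract above aligns multiple-peak time series by dynamic programming with a purpose-built peak-similarity function. The sketch below shows a generic Needleman-Wunsch-style alignment of two peak lists, each peak a (time, height) pair; the similarity score and gap penalty are simple placeholders of ours, not the paper's function.

```python
# Generic dynamic-programming alignment of two peak lists (time, height), in the
# spirit of the capillary-electrophoresis peak alignment described above.  The
# similarity score and gap penalty are placeholders, not the paper's.
import numpy as np

GAP = -1.0


def peak_similarity(p, q, pos_scale=5.0):
    """Reward peaks that are close in time and comparable in height."""
    dpos = abs(p[0] - q[0]) / pos_scale
    dheight = abs(p[1] - q[1]) / (max(p[1], q[1]) + 1e-9)
    return 2.0 - dpos - dheight


def align_peaks(a, b):
    """Needleman-Wunsch over two peak lists; returns the score and matched index pairs."""
    n, m = len(a), len(b)
    score = np.zeros((n + 1, m + 1))
    score[:, 0] = np.arange(n + 1) * GAP
    score[0, :] = np.arange(m + 1) * GAP
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i, j] = max(score[i - 1, j - 1] + peak_similarity(a[i - 1], b[j - 1]),
                              score[i - 1, j] + GAP,
                              score[i, j - 1] + GAP)
    # trace back to recover which peaks were matched
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if score[i, j] == score[i - 1, j - 1] + peak_similarity(a[i - 1], b[j - 1]):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif score[i, j] == score[i - 1, j] + GAP:
            i -= 1
        else:
            j -= 1
    return score[n, m], list(reversed(pairs))


if __name__ == "__main__":
    run1 = [(10.0, 1.0), (22.0, 0.7), (35.0, 0.4)]   # (time, height) peaks
    run2 = [(11.5, 0.9), (24.0, 0.6), (36.5, 0.5)]   # slightly shifted replicate
    total, matched = align_peaks(run1, run2)
    print("alignment score:", total)
    print("matched peak indices:", matched)
```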
Title B-cell epitope prediction for peptide-based vaccine design: towards a paradigm of biological outcomes Abstract Two major obstacles to the development of B-cell epitope prediction for peptide-based vaccine design are (1) the prevailing binary classification paradigm, which mandates the problematic dichotomization of continuous outcome variables, and (2) failure to explicitly model biological consequences of immunization that are relevant to practical considerations of safety and efficacy. The first obstacle is eliminated by redefining the predictive task as quantitative estimation of empirically observable biological effects of antibody-antigen binding, such that prediction is benchmarked using measures of correlation between continuous rather than dichotomous variables; but this alternative approach by itself fails to address the second obstacle even if benchmark data are selected to exclusively reflect functionally relevant cross-reactivity of antipeptide antibodies with protein antigens (as evidenced by antibody-modulated protein biological activity), particularly where only antibody-antigen binding is actually predicted as a surrogate for its biological effects. To overcome the second obstacle, the prerequisite is deliberate effort to predict, a priori, biological outcomes that are of immediate practical significance from the perspective of vaccination. This demands a much broader and deeper systems view of immunobiology than has hitherto been invoked for B-cell epitope prediction. Such a view would facilitate comprehension of many crucial yet largely neglected aspects of the vaccine-design problem. Of these, immunodominance among B-cell epitopes is a central unifying theme that subsumes immune phenomena of tolerance, imprinting and refocusing; but it is meaningful for vaccine design only in the light of disease-specific pathophysiology, which for infectious processes is complicated by host-pathogen coevolution. CCS Applied computing Physical sciences and engineering Earth and atmospheric sciences CCS Applied computing Physical sciences and engineering Engineering CCS Applied computing Physical sciences and engineering Physics Title Poster: Passing the three trillion particle limit with an error-controlled fast multipole method Abstract We present an error-controlled, highly scalable FMM implementation for long-range interactions of particle systems with open, 1D, 2D and 3D periodic boundary conditions. We highlight three aspects of fast summation codes not fully addressed in most articles; namely memory consumption, error control and runtime minimization. The aim of this poster is to contribute to all of these three points in the context of modern large scale parallel machines. Especially the used data structures, the parallelization approach and the precision-dependent parameter optimization will be discussed. The current code is able to compute all mutual long-range interactions of more than three trillion particles on 294.912 BG/P cores within a few minutes for an expansion up to quadrupoles. The maximum memory footprint of such a computation has been reduced to less than 45 Bytes per particle. The code employs a one-sided, non-blocking parallelization approach with a small communication overhead. 
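The fast-multipole poster above (like the direct N-body and pair-counting abstracts earlier in this section) is about evaluating all mutual long-range interactions among N particles. The brute-force baseline that such methods approximate in roughly linear time is the O(N^2) direct sum sketched below for a Coulomb-like 1/r kernel; this is only the reference baseline, not an FMM, and the kernel choice and names are ours.

```python
# Direct O(N^2) evaluation of all mutual long-range (Coulomb-like) interactions,
# i.e. the brute-force baseline that a fast multipole method approximates with
# controlled error in roughly O(N) time.  Baseline for illustration only.
import numpy as np


def direct_potentials(positions: np.ndarray, charges: np.ndarray) -> np.ndarray:
    """Potential phi_i = sum_{j != i} q_j / |r_i - r_j| via explicit pair sums."""
    diff = positions[:, None, :] - positions[None, :, :]    # (N, N, 3) pair vectors
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                          # exclude self-interaction
    return (charges[None, :] / dist).sum(axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    n = 1_000                      # feasible for the direct sum; FMM targets trillions
    pos = rng.random((n, 3))
    q = rng.choice([-1.0, 1.0], size=n)
    phi = direct_potentials(pos, q)
    print("mean potential:", phi.mean())
```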
Title Poster: 3D tixels: a highly efficient algorithm for gpu/cpu-acceleration of molecular dynamics on heterogeneous parallel architectures Abstract Several GPU-based algorithms have been developed to accelerate biomolecular simulations, but although they provide benefits over single-core implementations, they have not been able to surpass the performance of state-of-the-art SIMD CPU implementations (e.g. GROMACS), not to mention efficient scaling. Here, we present a heterogeneous parallelization that utilizes both CPU and GPU resources efficiently. A novel fixed-particle-number sub-cell algorithm for non-bonded force calculation was developed. The algorithm uses the SIMD width as its algorithmic work unit and is intrinsically future-proof, since it can be adapted to future hardware. The CUDA non-bonded kernel implementation achieves up to 60% work-efficiency, 1.5 IPC, and 95% L1 cache utilization. On the CPU, OpenMP-parallelized, SSE-accelerated code runs overlapping with GPU execution. Fully automated dynamic inter-process as well as CPU-GPU load balancing is employed. We achieve threefold speedup compared to equivalent GROMACS CPU code and show good strong and weak scaling. To the best of our knowledge, this is the fastest GPU molecular dynamics implementation presented to date. Title Excited states in lattice QCD using the stochastic LapH method Abstract A new method for computing the mass spectrum of excited baryons and mesons from the temporal correlations of quantum-field operators in quantum chromodynamics is described. The correlations are determined using Markov-chain Monte Carlo estimates of QCD path integrals formulated on an anisotropic space-time lattice. Access to the excited states of interest requires determinations of lower-lying multi-hadron state energies, necessitating the use of multi-hadron operators. Evaluating the correlations of such multi-hadron operators is difficult with standard methods. A new stochastic method of treating the low-lying modes of quark propagation, which exploits a new procedure for spatially smearing quark fields known as Laplacian Heaviside smearing, makes such calculations possible for the first time. Title Janus2: an FPGA-based supercomputer for spin glass simulations Abstract We describe the past and future of the Janus project. The collaboration started in 2006 and deployed in early 2008 the Janus supercomputer, a facility that made it possible to speed up Monte Carlo simulations of a class of model glassy systems and provided unprecedented results for some paradigms in Statistical Mechanics. The Janus supercomputer was based on state-of-the-art FPGA technology, and provided almost two orders of magnitude improvement in terms of cost/performance and power/performance ratios. More than four years later, commercial facilities are closing the gap in terms of performance, but FPGA technology has also largely improved. A new-generation supercomputer, Janus2, will be able to improve by more than one order of magnitude with respect to the previous one, and will accordingly again be the best choice for Monte Carlo simulations of spin glasses for several years to come with respect to commercial solutions. Title Using ScanMatch scores to understand differences in eye movements between correct and incorrect solvers on physics problems Abstract Using a ScanMatch algorithm we investigate scan path differences between subjects who answer physics problems correctly and incorrectly.
This algorithm bins a saccade sequence spatially and temporally, recodes this information to create a sequence of letters representing fixation location, duration and order, and compares two sequences to generate a similarity score. We recorded eye movements of 24 individuals on six physics problems containing diagrams with areas consistent with a novice-like response and areas of high perceptual salience. We calculated average ScanMatch similarity scores comparing correct solvers to one another (C-C), incorrect solvers to one another (I-I), and correct solvers to incorrect solvers (C-I). We found statistically significant differences between the C-C and I-I comparisons on only one of the problems. This seems to imply that top down processes relying on incorrect domain knowledge, rather than bottom up processes driven by perceptual salience, determine the eye movements of incorrect solvers. Title A dependency-driven formulation of parareal: parallel-in-time solution of PDEs as a many-task application Abstract Parareal is a novel algorithm that allows the solution of time-dependent systems of differential or partial differential equations (PDE) to be parallelized in the temporal domain. Parareal-based implementations of PDE problems can take advantage of this parallelism to significantly reduce the time to solution for a simulation (though at an increased total cost) while making effective use of the much larger processor counts available on current high-end systems. In this paper, we present a dynamic, dependency-driven version of the parareal algorithm which breaks the final sequential bottleneck remaining in the original formulation, making it amenable to a "many-task" treatment. We further improve the cost and execution time of the algorithm by introducing a moving window for time slices, which avoids the execution of tasks which contribute little to the final global solution. We describe how this approach has been realized in the Integrated Plasma Simulator (IPS), a framework for coupled multiphysics simulations, and examine the trade-offs among time-to-solution, total cost, and resource utilization efficiency as a function of the compute resources applied to the problem. Title Physics in motion: an interdisciplinary project Abstract Students in computer science and information technology should be engaged in solving real-world problems received from government and industry as well as those that expose them to various areas of application. This paper summarizes the results of an undergraduate research project between students in the Department of Information Sciences and Technology (IST) and the Department of Physics. Students were provided with a copy of Satellite Tool Kit®, a commercial software product, and asked to complete research and development tasks based on the concepts learned in a distributed computing course. This interdisciplinary and collaborative effort provided challenges, lessons learned and positive experiences for future development. Title ELI-ALPS: the ultrafast challenges in Hungary Abstract The ELI -- Extreme Light Infrastructure -- or as it is commonly referred to: the SUPERLASER will be one of the large research facilities of the European Union. ELI will be built with a joint international effort to form an integrated infrastructure comprised of three branches. 
The ELI Beamline Facility (Prague, Czech Republic) will mainly focus on particle acceleration and X-ray generation, while the ELI Nuclear Physics Facility (Magurele, Romania) will be dealing with laser-based nuclear physics as well as high-field physics. In the talk we introduce the ELI Attosecond Light Pulse Source (ELI-ALPS) to be built in Szeged, Hungary. The primary mission of the ELI-ALPS Research Infrastructure is to provide the international scientific community with a broad range of ultrafast light sources, especially coherent XUV and X-ray radiation, including single attosecond pulses. Thanks to this combination of parameters never achieved before, the energetic attosecond X-ray pulses of ELI-ALPS will enable recording freeze-frame images of the dynamical electronic-structural behaviour of complex atomic, molecular and condensed matter systems, with attosecond-picometer resolution. The secondary purpose is to contribute to the scientific and technological development towards generating 200 PW pulses, the ultimate goal of the ELI project. ELI-ALPS will also be operated as a user facility and hence serve basic and applied research in the physical, chemical, material and biomedical sciences as well as industrial applications. The facility will be built by the end of 2015 from a budget exceeding 240M EUR. The building and the IT infrastructure, from high-speed internal networking, remote-controlled system alignment, targetry and data acquisition through laser and radiation safety tools to security systems, will challenge the state of the art of similar research facilities. Title Visualization of multiscale simulation data: brain blood flow Abstract Accurately modeling many physical and biological systems requires simulating at multiple scales. This results in large heterogeneous data sets on vastly differing scales, both physical and temporal. To address the challenges in multi-scale data analysis and visualization we have developed and successfully applied a set of tools, which we describe in this paper. Title Petascale kinetic simulation of the magnetosphere Abstract In this paper, we describe our latest advances in space weather studies based on petascale simulations and novel analysis techniques that we have developed. CCS Applied computing Physical sciences and engineering Mathematics and statistics Title Multi-container loading with non-convex 3D shapes using a GA/TS hybrid Abstract A genetic algorithm is developed for a multi-container problem and integrated into a commercial software product. The considered problem is characterized by specific requirements, e.g. non-convex 3D shapes composed of several cuboids and a broad range of constraints. The algorithm uses the packing list as the genotype, the first-fit heuristic for placing the items, and a set of problem-specific operators. The algorithm is tested on simple examples, benchmarks by Bischoff/Ratcliff and Loh/Nee, and real-world customer data. The proposed algorithm proves to be an all-rounder that excels on non-convex problems and delivers acceptable results on regular (benchmark) problems. Title Fast visibility analysis in 3D procedural modeling environments Abstract This paper presents a unique solution to the visibility problem in 3D urban environments generated by procedural modeling. We shall introduce a visibility algorithm for a 3D urban environment consisting of mass modeling shapes. Mass modeling consists of a basic shape vocabulary with a box as the basic structure.
Using boxes as simple mass model shapes, one can generate basic building blocks such as Title Object-image correspondence for curves under central and parallel projections Abstract We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison to algorithms based on the straightforward approach, lies in a significant reduction of the number of real parameters that need to be eliminated in order to establish the existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. The latter problem is then solved using a separating set of rational differential invariants. A similar approach can be used to solve the projection problem for finite lists of points. The motivation comes from the problem of establishing a correspondence between an object and an image, taken by a camera with unknown position and parameters. Title Using weighted norms to find nearest polynomials satisfying linear constraints Abstract This paper extends earlier results on finding nearest polynomials, expressed in various polynomial bases, satisfying linear constraints. Results are extended to different bases, including Hermite interpolational bases (not to be confused with the Hermite orthogonal polynomials). Results are also extended to the case of weighted norms, which turns out to be slightly nontrivial, and interesting in practice. Title Dynamic river network simulation at large scale Abstract Fully dynamic modeling of large-scale river networks is still a challenge. In this paper we describe SPRINT, an interdisciplinary collaborative effort between computer engineering and hydroscience to address the computational aspect of this challenge. Although algorithmic details differ, SPRINT draws many design considerations from SPICE, one of the most fundamental EDA tools. Experimental results demonstrate that SPRINT is capable of simulating large river basins over 100x faster than real time. Title ComPLx: A Competitive Primal-dual Lagrange Optimization for Global Placement Abstract We develop a projected-subgradient primal-dual Lagrange optimization for global placement that can be instantiated with a variety of interconnect models. It decomposes the original non-convex problem into "more convex" sub-problems. It generalizes the recent SimPL, SimPLR and Ripple algorithms and extends them. Empirically, ComPLx outperforms all published placers in runtime and performance on the ISPD 2005 and 2006 benchmarks. Title Rationing problems in bipartite networks Abstract The standard theory of rationing problems is extended to the bipartite context. The focus is on Title A rigorous and customizable framework for privacy Abstract In this paper we introduce a new and general privacy framework called Pufferfish. The Pufferfish framework can be used to create new privacy definitions that are customized to the needs of a given application.
The goal of Pufferfish is to allow experts in an application domain, who frequently do not have expertise in privacy, to develop rigorous privacy definitions for their data sharing needs. In addition to this, the Pufferfish framework can also be used to study existing privacy definitions. We illustrate the benefits with several applications of this privacy framework: we use it to formalize and prove the statement that differential privacy assumes independence between records; we use it to define and study the notion of Title Differential privacy in data publication and analysis Abstract Data privacy has been an important research topic in the security, theory and database communities in the last few decades. However, many existing studies have restrictive assumptions regarding the adversary's prior knowledge, meaning that they preserve individuals' privacy only when the adversary has rather limited background information about the sensitive data, or only uses certain kinds of attacks. Recently, Title DP-tree: indexing multi-dimensional data under differential privacy (abstract only) Abstract ε-differential privacy (ε-DP) is a strong and rigorous scheme for protecting individuals' privacy while releasing useful statistical information. The main idea is to inject random noise into the results of statistical queries, such that the existence of any single record has negligible impact on the distributions of query results. The accuracy of such randomized results depends heavily upon the query processing technique, which has been an active research topic in recent years. So far, most existing methods focus on 1-dimensional queries. The only work that handles multi-dimensional query processing under ε-DP is [1], which indexes the sensitive data using variants of the quad-tree and the k-d-tree. As we point out in this paper, these structures are inherently suboptimal for answering queries under ε-DP. Consequently, the solutions in [1] suffer from several serious drawbacks, including limited and unstable query accuracy, as well as bias towards certain types of queries. Motivated by this, we propose the DP-tree, a novel index structure for multi-dimensional query processing under ε-DP that eliminates the problems encountered by the methods in [1]. Further, we show that the effectiveness of the DP-tree can be improved using statistical information about the query workload. Extensive experiments using real and synthetic datasets confirm that the DP-tree achieves significantly higher query accuracy than existing methods. Interestingly, an adaptation of the DP-tree also outperforms previous 1D solutions in their restricted scope, by large margins. CCS Applied computing Physical sciences and engineering Electronics CCS Applied computing Physical sciences and engineering Telecommunications CCS Applied computing Life and medical sciences Computational biology CCS Applied computing Life and medical sciences Genomics CCS Applied computing Life and medical sciences Systems biology Title Boosting-based discovery of multi-component physiological indicators: applications to express diagnostics and personalized treatment optimization Abstract The increasing availability of multi-scale and multi-channel physiological data opens new horizons for quantitative modeling in medicine. However, practical limitations of existing approaches include both the low accuracy of simplified analytical models and empirical expert-defined rules, and the insufficient interpretability and stability of pure data-driven models.
Such challenges are typical for automated diagnostics from high-resolution image data and multi-channel temporal physiological information available in modern clinical settings. In addition, an increasing number of portable and wearable systems for collecting physiological data outside medical facilities provides an opportunity for express and remote diagnostics as well as early detection of irregular and transient patterns caused by developing abnormalities or subtle initial effects of new treatments. However, quantitative modeling in such applications is even more challenging due to obvious limitations on the number of data channels, increased noise, and the non-stationary nature of the tasks considered. Methods from nonlinear dynamics (NLD) are natural modeling tools for adaptive biological systems with multiple feedback loops and are capable of inferring essential dynamic properties from just one or a small number of data channels. However, most NLD indicators require long periods of data for stable calculation, which significantly limits their practical value. Many of these challenges in biomedical modeling could be overcome by boosting and similar ensemble learning techniques that are capable of discovering robust multi-component meta-models from existing simplified models and other incomplete empirical knowledge. Here we describe an application of this approach to a practical system for express diagnostics and early detection of treatment responses from short beat-to-beat heart rate (RR) time series. The proposed system could play a key role in many applications relevant to e-healthcare, personalized medicine, express and remote web-enabled diagnostics, decision support systems, treatment optimization and others. Title Retrieving information from the book of humanity: the personalized medicine data tsunami crashes on the beach of jeopardy Abstract From a mute but eloquent alphabet of 4 characters emerges a complex biological 'literature' whose highest expression is human existence. The rapidly advancing technologies of 'next-gen sequencing' will soon make it possible to inexpensively acquire and store the characters of our complete personal genetic instruction set and make it available for health assessment and disease management. This uniquely personal form of 'big data' brings with it challenges that will be discussed in this keynote presentation. Topics will include a brief introduction to the linguistic challenges of 'biology as literature', the impact of personal molecular variation on traditional approaches to disease prevention, diagnosis and treatment, and the challenges of information retrieval when a large volume of primary observations is made that is associated with an evanescent and rapidly changing corpus of scientific interpretation of those primary observations. Experience with extracting high-quality phenotypes from electronic medical records has shown that Natural Language Processing capability is an essential information extraction function for correlating clinical events with personal genetic variation. Any powerful set of information can be used or misused, and put those who depend upon it in jeopardy. These issues, and a lesson from the long-running Jeopardy TV series, will be discussed.
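The boosting-based diagnostics entry above describes combining many weak, simplified indicators into a robust multi-component meta-model. The following sketch is purely illustrative and is not the authors' system: the features, labels, and data are hypothetical placeholders, and it only shows the general ensemble idea using a standard off-the-shelf boosting implementation.

# Purely illustrative sketch: boosting many weak "simplified models" into one
# meta-model, in the spirit of the multi-component physiological indicators
# described above. All features and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical per-recording features: mean RR interval, short-term RR
# variability, and a toy nonlinear-dynamics index.
X = rng.normal(size=(300, 3))
# Hypothetical binary outcome (e.g. early treatment response), loosely tied to the features.
y = (0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=300) > 0).astype(int)

# AdaBoost over decision stumps: each stump is a deliberately simple indicator,
# and boosting combines many of them into a single multi-component meta-model.
meta_model = AdaBoostClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(meta_model, X, y, cv=5).mean())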
Title Complex and diverse morphologies can develop from a minimal genomic model Abstract While development plays a critical role in the emergence of diversity, its mechanical and chemical actions are considered to be inextricably correlated with Title GAMIV: a genetic algorithm for identifying variable-length motifs in noncoding DNA Abstract GAMI uses a genetic algorithm to identify putatively conserved motifs of a pre-selected length in noncoding DNA from diverse species. In this work, I present an extension to the system, GAMIV, that identifies putatively conserved motifs of variable length. The system begins with an initial set of very short motifs and allows them to grow through a pair of custom operators. A fitness function that rewards both motif conservation and motif length is used to evolve a population of conserved motifs of variable length. This paper describes the motivation for GAMIV, discusses the design of the system, and presents initial results for the system. Based on these initial results, GAMIV is a promising tool for the inference of variable-length motifs in noncoding DNA. Title Introduction to bioinformatics and computational biology Abstract The field of biological sciences has been transformed in recent years into a domain of incredibly rich data ripe for computational exploration. High-throughput technologies allow investigators to construct vast feature sets, including genetic variables, gene expression values, protein levels, biomarkers, and a multitude of other traits. These rich feature sets can be used to predict disease risk and prioritize treatment strategies, but there are incredible challenges in the analysis of such data. Features often exhibit correlation patterns and are subject to normalization, measurement errors, and other forms of noise. Furthermore, there are far more features in a typical dataset than there are samples. Despite these issues, the analysis of complex biological data can lead to a new understanding of biological systems and human health. This tutorial provides an introduction to the fundamental concepts of biological science. It examines the established methods for generating biological datasets, outlines online databases that contain much of this data, and introduces the newest methods for capturing high-resolution genomic sequence data. Attendees will leave this tutorial with a better understanding of the problem domains that exist in biological science. Title A dynamical model of cancer chemotherapy with disturbance Abstract This work proposes a controlled stochastic difference equation model of scheduling, with quadratic cost criteria, for cancer chemotherapy. By reducing the problem to quadratic control optimization and introducing a random search algorithm, we seek an optimal chemotherapy schedule. Our ultimate goal is to provide more realistic solutions than previous models. To reach this goal, our model ideally kills the maximum number of cancer cells to eradicate the disease while preserving the number of normal cells. Our results show the proposed model works well for cancer chemotherapy. Our algorithm is fast and helps produce practical schedules. Title The search for robust topologies of oscillatory gene regulatory networks by evolutionary computation Abstract Synthetic biology has yielded many successful basic modules inspired by electronic devices over the last ten years. However, there has been very limited success in designing higher-order modules by assembling these simpler devices.
In general, this lack of robustness prevents these devices from working as reliable parts of larger systems. In this paper, we propose an evolutionary search method to construct a robust gene regulatory network topology in which the concentrations of genes oscillate like a repressilator. This kind of oscillating network works as a "clock" in the biological system and has long been a center of attention. However, the robustness of the designed network has received less attention. Our genetic algorithm evolves an oscillating gene regulatory network with a topology superior to other existing topologies in terms of robustness. Title ProRank: a method for detecting protein complexes Abstract Detecting protein complexes from protein-protein interaction (PPI) networks is a difficult challenge in computational biology. Observations show that genes causing the same or similar diseases tend to lie close to one another in a network of protein-protein or functional interactions. This paper introduces a novel method for detecting protein complexes from PPI networks by using a protein ranking algorithm (ProRank) and incorporating evolutionary relationships between proteins in the network. The method successfully predicted 57 out of 81 benchmarked protein complexes created from the Munich Information Center for Protein Sequences (MIPS). The level of accuracy achieved using ProRank in comparison to other recent methods for detecting protein complexes is a strong argument in favor of our proposed method. Datasets, programs and results are available at http://faculty.uaeu.ac.ae/nzaki/ProRank.htm. Title Efficient algorithms for extracting biological key pathways with global constraints Abstract The integrated analysis of data of different types and with various interdependencies is one of the major challenges in computational biology. Recently, we developed KeyPathwayMiner, a method that combines biological networks modeled as graphs with disease-specific genetic expression data gained from a set of cases (patients, cell lines, tissues, etc.). We aimed to find all maximal connected sub-graphs where all nodes but K are expressed in all cases but at most L, i.e. key pathways. Thereby, we combined biological networks with OMICS data, instead of analyzing these data sets in isolation. Here we present an alternative approach that avoids a certain bias towards hub nodes: we now aim to extract all maximal connected sub-networks where all but at most K nodes are expressed in all cases but in total (!) at most L, i.e. accumulated over all cases and all nodes in a solution. We call this strategy GLONE (global node exceptions); the previous problem we call INES (individual node exceptions). Since finding GLONE components is computationally hard, we developed an Ant Colony Optimization algorithm and implemented it within the KeyPathwayMiner Cytoscape framework as an alternative to the INES algorithms. KeyPathwayMiner 3.0 now offers both the INES and the GLONE algorithms. It is available as a plugin for Cytoscape and online at http://keypathwayminer.mpi-inf.mpg.de. Title Instance-linked attribute tracking and feedback for michigan-style supervised learning classifier systems Abstract The application of learning classifier systems (LCSs) to classification and data mining in genetic association studies has been the target of previous work.
Recent efforts have focused on: (1) correctly discriminating between predictive and non-predictive attributes, and (2) detecting and characterizing epistasis (attribute interaction) and heterogeneity. While the solutions evolved by Michigan-style LCSs (M-LCSs) are conceptually well suited to address these phenomena, the explicit characterization of heterogeneity remains a particular challenge. In this study we introduce attribute tracking, a mechanism akin to memory, for supervised learning in M-LCSs. Given a finite training set, a vector of accuracy scores is maintained for each instance in the data. Post-training, we apply these scores to characterize patterns of association in the dataset. Additionally we introduce attribute feedback to the mutation and crossover mechanisms, probabilistically directing rule generalization based on an instance's tracking scores. We find that attribute tracking combined with clustering and visualization facilitates the characterization of epistasis and heterogeneity while uniquely linking individual instances in the dataset to etiologically heterogeneous subgroups. Moreover, these analyses demonstrate that attribute feedback significantly improves test accuracy, efficient generalization, run time, and the power to discriminate between predictive and non-predictive attributes in the presence of heterogeneity. CCS Applied computing Life and medical sciences Consumer health Title Characterizing mammography reports for health analytics Abstract As massive collections of digital health data are becoming available, the opportunities for large scale automated analysis increase. In particular, the widespread collection of detailed health information is expected to help realize a vision of evidence-based public health and patient-centric health care. Within such a framework for large scale health analytics we describe several methods to characterize and analyze free-text mammography reports, including their temporal dimension, using information retrieval, supervised learning, and classical statistical techniques. We present experimental results with a large collection of mostly unlabeled reports that demonstrate the validity and usefulness of the approach, since these results are consistent with the known features of the data and provide novel insights about it. Title Shape discrimination test on handheld devices for patient self-test Abstract Timely treatment for the patients with early stage diabetic retinopathy (DR) before transition to proliferative diabetic retinopathy (PDR) or diabetic macular edema (DME) depends on detection of symptoms in time. Any test capable of achieving this must be simple, accessible and reliable enough for patients to administer themselves. Shape discrimination test has the potential to allow for timely detection of PDR or DME. Moreover, implementation of this test on handheld devices can present a feasible solution for home self-test of these conditions. The paper presents an iPod/iPhone implementation of the test method and reports initial experimental results. The software solutions include proper UI considerations like simplicity and visual clarity for patients with eye problems, and also the provision for local and remote extraction of test data. Title Health information and decision-making preferences in the internet age: a pilot study using the health information wants (HIW) questionnaire Abstract Recent paradigm shift in health care calls for more attention to patient preferences. 
The Health Information Wants (HIW) Questionnaire measures patients' preferences (desires) for health information and participation in decision-making. It has parallel items in seven corresponding areas of information and decision-making (diagnosis, treatment, laboratory test, self-care, complementary and alternative medicine, psychosocial, and health care provider). A pilot study was conducted to generate preliminary data about the psychometric properties of this instrument, the relationships between information and decision-making preferences in each of the seven areas, and the relationships among Internet use, age, and preferences for each type of health information and decision-making. The results show that the HIW Questionnaire has strong reliability and validity. After controlling for gender, education, perception of severity, and health, the overall preferences for health information and decision-making were positively correlated. Multilevel modeling analysis results showed that age was negatively related to the overall preference ratings. The differences in decision-making preference ratings between young and older adults were greater than those in information preference ratings. Internet use frequency was not significantly related to preference ratings. The relationships examined varied across the seven subscales (e.g., on the diagnosis subscale, age was positively associated with diagnostic decision-making preferences). These findings have implications for a better understanding of patient preferences, patient-provider relationships, and the quality of health care. Title VPW: an interactive prototype of a web-based visual paired comparison cognitive diagnostic test Abstract The Visual Paired Comparison (VPC) task is widely used to measure recognition memory in psychology and neuroscience research. Recently, the VPC task has shown promise as a diagnostic for the amnestic subtype of Mild Cognitive Impairment (aMCI). Patients diagnosed with aMCI are at an increased risk for developing dementia, especially Alzheimer's disease. However, current implementations of VPC require eye tracking equipment, which is costly and not widely available. This demonstration shows our early prototype of a Web-based version of the VPC task, the Title Health score prediction using low-invasive sensors Abstract Health-state scores for elderly people are regarded as important in nursing and medical fields. However, obtaining these scores requires nurses to administer questionnaires. As a result, the rate at which such health assessments are carried out in ordinary homes is still low. To solve this problem, we propose a method to predict the health score by using low-invasive sensors. We adopt regression as the prediction method and construct features to absorb individual differences. As part of a feasibility study of social participation for elderly people, we conducted a nurse-administered health-state survey and simultaneously installed low-invasive sensors in real-life settings. Experimental results from the feasibility study show promise for predicting the score from sensor data. In addition, the results suggest that extracting features related to living behaviors improves accuracy compared to using raw sensor data. Title Towards heterogeneous temporal clinical event pattern discovery: a convolutional approach Abstract Large collections of electronic clinical records today provide us with a vast source of information on medical practice.
However, the utilization of those data for exploratory analysis to support clinical decisions is still limited. Extracting useful patterns from such data is particularly challenging because it is Title A study on damping profile for prosthetic knee Abstract An intelligent prosthetic leg for above-knee amputees has been developed by the Indian Institute of Information Technology - Allahabad. The leg is called the Adaptive Modular Active Leg (AMAL). The main aim of this paper is to generate suitable damping profiles required for locomotion by above-knee prosthetic patients. A detailed analysis of the human gait cycle is needed to provide damping profiles to the prosthetic knee. This information is obtained from the healthy leg. A simple potentiometer sensor is fitted beside the healthy knee to measure the knee angle, and strain gauges are mounted below the heel, in the shoe, to measure gait strain. These signals from the knee and the heel are the inputs that describe the gait cycle of the patient. These two signal values are cleaned using a Kalman filter to reduce sensor noise and improve system performance. The human gait cycle is divided into six different phases to evaluate damping profiles. In this paper, we formulate six different damping equations to produce damping profiles for the prosthetic knee. An artificial neural network is used to classify the different phases of the walking cycle with suitable damping values. Title An optimal reconstruction of the human arterial tree from doppler echotracking measurements Abstract Starting from non-invasive experimental measurements by Doppler echotracking, the human arterial tree of a given patient is numerically reconstructed. The chosen approach consists in building a simplified fluid/structure interaction model for each artery and finding the parameters of the network by solving an inverse problem. The first reconstruction results of the lower arterial tree of a healthy patient are given and show a very good agreement with the echotracking measurements. Such a numerical reconstruction, which includes in particular the estimation of the stiffness of each artery, will help with the early diagnosis of cardiovascular diseases. Title A real-time architecture for detection of diseases using social networks: design, implementation and evaluation Abstract In this work we developed a surveillance architecture to detect disease-related postings in social networks, using Twitter as an example of a high-traffic social network. Our real-time architecture uses the Twitter streaming API to crawl Twitter messages as they are posted. Data mining techniques have been used to index, extract and classify postings. Finally, we evaluate the performance of the classifier with a dataset of public health postings and also evaluate the run-time performance of the whole system with respect to latency and throughput. Title Ultraviolet guardian - real time ultraviolet monitoring: estimating the pedestrians ultraviolet exposure before stepping outdoors Abstract Geographic location, environmental properties, and altitude are factors that contribute to the increase or decrease of a pedestrian's ultraviolet (UV) exposure. Over-exposure can cause severe skin damage, possibly leading to skin cancer. This work proposes an algorithm for estimating a pedestrian's UVA and UVB exposure along a path in an urban environment before stepping outdoors. The algorithm is incorporated into our UV Guardian (UVG) system.
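The Ultraviolet Guardian entry above does not spell out its algorithm. Purely as a hypothetical illustration of the general idea of accumulating UV dose along a walking path before stepping outdoors, the sketch below sums, per path segment, an assumed irradiance scaled by an assumed shading factor and the time spent in the segment; every name and number here is an assumption, not the paper's method.

# Hypothetical sketch only: naive accumulation of UV dose along a path.
from dataclasses import dataclass

@dataclass
class Segment:
    length_m: float      # segment length in metres
    shade_factor: float  # assumed exposure fraction: 0.0 fully shaded, 1.0 fully exposed

def estimate_uv_dose(segments, irradiance_w_m2, walking_speed_m_s=1.4):
    """Return a rough UV dose in J/m^2 under the stated assumptions."""
    dose = 0.0
    for seg in segments:
        time_s = seg.length_m / walking_speed_m_s
        dose += irradiance_w_m2 * seg.shade_factor * time_s
    return dose

# Example: 300 m in the open plus 200 m under tree cover at an assumed UVB irradiance of 0.5 W/m^2.
path = [Segment(300, 1.0), Segment(200, 0.3)]
print(round(estimate_uv_dose(path, irradiance_w_m2=0.5), 1))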
For all pedestrian path walk experiments conducted, results show that the proposed algorithm estimates a pedestrian's UVB exposure with 94% accuracy. For UVA exposure estimation, the accuracy is 71%. CCS Applied computing Life and medical sciences Health care information systems Title Beyond safe harbor: automatic discovery of health information de-identification policy alternatives Abstract Regulations in various countries permit the reuse of health information without patient authorization provided the data is "de-identified". In the United States, for instance, the Privacy Rule of the Health Insurance Portability and Accountability Act defines two distinct approaches to achieve de-identification; the first is Title A framework to model and translate clinical rules to support complex real-time analysis of physiological and clinical data Abstract We present a framework to model and translate clinical rules to support complex real-time analysis of both synchronous physiological data and asynchronous clinical data. The framework is demonstrated through a case study in a neonatal intensive care context showing how a clinical rule for detecting an apnoeic event is modeled across multiple physiological data streams in the Artemis environment, which employs IBM's InfoSphere Streams middleware to support real-time stream processing. Initial clinical hypotheses for apnoea detection are modeled using UML activity diagrams, which are subsequently translated into Streams' SPADE code to be deployed in Artemis to deliver real-time decision support. Our aim is to provide a Clinical Decision Support System capable of identifying and detecting patterns in physiological data streams indicative of the onset of clinically significant conditions that may adversely affect health outcomes. Benefits associated with our approach include: 1) reduced time and effort on the clinician's part to assess health data from multiple sources; 2) the ability to allow clinicians to control the rules-engine of Artemis to enhance clinical care within their unique environments; 3) the ability to apply clinical alerts to both synchronous and asynchronous data; and 4) the ability to continuously process data in real time. Title Medical decision making using vector space model Abstract This paper addresses the task of analyzing healthcare data for medical decision making. We describe a method for ranking medications based on historical data of the outcomes recorded as part of a system of Electronic Medical Records (EMR). Medication ranking can be used to recommend medications for a given group of diagnoses. The ranking process captures the effects of medication and subsequent diagnoses. We used longitudinal electronic medical records of five test patients for the purpose of this study. More than 5000 medical visit documents are analyzed and the medication and diagnosis information are extracted to create a vector space model. The resulting matrix ranked 167 medications and 187 problems. This is designed to enable the decision-making capabilities within EMRs. Similar approaches can be used to provide decision support towards preventive medication. Title Increasing patient safety using explanation-driven personalized content recommendation Abstract In this paper we describe a novel approach to increasing patients' safety using explanation-driven personalized content recommendation.
In this approach, patients are exposed to relevant medical information that is continuously gathered from various web sources and delivered in a timely manner, educating patients toward better preventive medicine decision making. Personalized content recommendations are further accompanied by detailed explanations that disclose useful information about each recommendation, helping patients to better utilize the recommended content for improving their safety. As a motivation for this approach, we describe the application of personalized ADE alerts. We then describe the recommendation system that we implement for the Title A bricolage perspective on healthcare information systems design: an improvisation model Abstract Pressured by escalating costs, continual demand for high quality, and the speed of technological advances, the need for change and improvisation has become a critical priority for the healthcare industry. Now society demands that healthcare providers offer better patient care through the careful use of information technologies. For that, practitioners are urged to expand the boundaries of innovative IS design strategies. Unfortunately, research on healthcare information systems (HIS) improvisation remains relatively underdeveloped. Thus, this study uses the organizational improvisation and bricolage theoretical lenses, from the perspective of a case study, to examine how strategic improvisation might give rise to fruitful, novel HIS design performances. Theoretically, we provide an inductively derived strategic conceptual model of improvisation that couples with network, structure, and institutional bricolage to execute a 'resource-time-effort' model. This enables us to improvise a superior HIS that offers quality patient-centric healthcare delivery and a valuable improvisation model. Professionally, this study contributes three key insights for IS improvisation in the healthcare industry. Title Large-scale multimodal mining for healthcare with mapreduce Abstract Recent advances in healthcare and bioscience technologies and the proliferation of portable medical devices have produced massive amounts of multimodal data; the need for parallel processing is apparent for mining these data sets, which can range anywhere from tens of gigabytes to terabytes or even petabytes. AALIM (Advanced Analytics for Information Management) is a new multimodal mining-based clinical decision support system that brings together patient data captured in many modalities to provide a holistic presentation of a patient's exam data, diseases, and medications. In addition, it offers disease-specific similarity search based on the various data modalities. The currently deployed AALIM system is only able to process a limited amount of patient data per day. In this paper, we attempt to address the challenge of building a healthcare multimodal mining system on top of the MapReduce framework, specifically its popular open-source implementation, Hadoop. We present a scalable and generic framework that enables automatic parallelization of the healthcare multimodal mining algorithm, and distribution of large-scale computation that achieves high performance on clusters of commodity servers. Initial testing of importing a single AALIM module (EKG period estimation) using Hadoop on a cluster of servers shows very promising results. Title Towards large-scale sharing of electronic health records of cancer patients Abstract The rising cost of healthcare is one of the major concerns faced by the nation.
One way to lower healthcare costs and provide better quality care to patients is through the effective use of Information Technology (IT). Data sharing and collaboration and large-scale management of healthcare data have been identified as important IT challenges to advance the nation's healthcare system. In this paper, we present an overview of the software framework called CDN (Collaborative Data Network) that we are developing for large-scale sharing of electronic health records (EHR). In this on-going effort, we focus on sharing EHRs of cancer patients. Cancer is the second leading cause of deaths in the US. CDN is based on the synergistic combination of peer-to-peer technology and the extensible markup language XML and XQuery. We outline the key challenges that arise when sharing evolving, heterogeneous repositories and processing queries across multiple repositories. We present the novel architecture of CDN to overcome these challenges and discuss our plan for implementation, evaluation, and deployment. Title An animated multivariate visualization for physiological and clinical data in the ICU Abstract Current visualizations of electronic medical data in the Intensive Care Unit (ICU) consist of stacked univariate plots of variables over time and a tabular display of the current numeric values for the corresponding variables and occasionally an alarm limit. The value of information is dependent upon knowledge of historic values to determine a change in state. With the ability to acquire more historic information, providers need more sophisticated visualization tools to assist them in analyzing the data in a multivariate fashion over time. We present a multivariate time series visualization that is interactive and animated, and has proven to be as effective as current methods in the ICU for predicting an episode of acute hypotension in terms of accuracy, confidence, and efficiency with only 30-60 minutes of training. Title EPharmacyNet: an approach to improve the pharmaceutical care delivery in developing countries-study case-BENIN Abstract One of the problems in health care in developing countries is the bad accessibility of medicine in pharmacies for patients. Since this is mainly due to a lack of organization and information, it should be possible to improve the situation by introducing information and communication technology. However, for several reasons, standard solutions are not applicable here. In this paper, we describe a case study in Benin, a West African developing country. We identify the problem and the existing obstacles for applying standard ECommerce solutions. Then we describe an adapted system approach and a practical test which has shown that the approach has the potential of actually improving the pharmaceutical care delivery, i.e. improving the distribution of medicine in developing countries, mainly in rural regions. Title An information and communication technology system to support rural healthcare delivery Abstract Using information and communication technology (ICT) is a promising solution to assist and alleviate some of the existing problems in rural healthcare such as distance, lower number of care providers, communication, information integration, etc. In this paper, we review current work in the area and share our experience of designing a system to provide better continuity of care to patients in rural areas. 
We have interviewed physicians who work in rural areas, performed site visits, and analyzed the current paper-based system to better understand existing problems and consequently design a better system to tackle the problems. CCS Applied computing Life and medical sciences Health informatics Title Characterizing mammography reports for health analytics Abstract As massive collections of digital health data are becoming available, the opportunities for large scale automated analysis increase. In particular, the widespread collection of detailed health information is expected to help realize a vision of evidence-based public health and patient-centric health care. Within such a framework for large scale health analytics we describe several methods to characterize and analyze free-text mammography reports, including their temporal dimension, using information retrieval, supervised learning, and classical statistical techniques. We present experimental results with a large collection of mostly unlabeled reports that demonstrate the validity and usefulness of the approach, since these results are consistent with the known features of the data and provide novel insights about it. Title Shape discrimination test on handheld devices for patient self-test Abstract Timely treatment for the patients with early stage diabetic retinopathy (DR) before transition to proliferative diabetic retinopathy (PDR) or diabetic macular edema (DME) depends on detection of symptoms in time. Any test capable of achieving this must be simple, accessible and reliable enough for patients to administer themselves. Shape discrimination test has the potential to allow for timely detection of PDR or DME. Moreover, implementation of this test on handheld devices can present a feasible solution for home self-test of these conditions. The paper presents an iPod/iPhone implementation of the test method and reports initial experimental results. The software solutions include proper UI considerations like simplicity and visual clarity for patients with eye problems, and also the provision for local and remote extraction of test data. Title Health information and decision-making preferences in the internet age: a pilot study using the health information wants (HIW) questionnaire Abstract Recent paradigm shift in health care calls for more attention to patient preferences. The Health Information Wants (HIW) Questionnaire measures patients' preferences (desires) for health information and participation in decision-making. It has parallel items in seven corresponding areas of information and decision-making (diagnosis, treatment, laboratory test, self-care, complementary and alternative medicine, psychosocial, and health care provider). A pilot study was conducted to generate preliminary data about the psychometric property of this instrument, the relationships between information and decision-making preferences in each of the seven areas, and the relationships among Internet use, age, and preferences for each type of health information and decision-making. The results show that the HIW Questionnaire has strong reliability and validity. After controlling for gender, education, perception of severity, and health, the overall preferences for health information and decision-making were positively correlated. Multilevel modeling analysis results showed that age was negatively related to the overall preference ratings. 
The differences in decision-making preference ratings between young and older adults were greater than those in information preference ratings. Internet use frequency was not significantly related to preference ratings. The relationships examined varied across the seven subscales (e.g., on the diagnosis subscale, age was positively associated with diagnostic decision-making preferences). These findings have implications for a better understanding of patient preferences, patient-provider relationships, and the quality of health care. Title VPW: an interactive prototype of a web-based visual paired comparison cognitive diagnostic test Abstract The Visual Paired Comparison (VPC) task is widely used to measure recognition memory in psychology and neuroscience research. Recently, the VPC task has shown promise as a diagnostic for the amnestic subtype of Mild Cognitive Impairment (aMCI). Patients diagnosed with aMCI are at an increased risk for developing dementia, especially Alzheimer's disease. However, current implementations of VPC require eye tracking equipment, which is costly and not widely available. This demonstration shows our early prototype of a Web-based version of the VPC task, the Title Health score prediction using low-invasive sensors Abstract Health-state scores for elderly people are regarded as important in nursing and medical fields. However, obtaining these scores requires nurses to administer questionnaires. As a result, the rate at which such health assessments are carried out in ordinary homes is still low. To solve this problem, we propose a method to predict the health score by using low-invasive sensors. We adopt regression as the prediction method and construct features to absorb individual differences. As part of a feasibility study of social participation for elderly people, we conducted a nurse-administered health-state survey and simultaneously installed low-invasive sensors in real-life settings. Experimental results from the feasibility study show promise for predicting the score from sensor data. In addition, the results suggest that extracting features related to living behaviors improves accuracy compared to using raw sensor data. Title Towards heterogeneous temporal clinical event pattern discovery: a convolutional approach Abstract Large collections of electronic clinical records today provide us with a vast source of information on medical practice. However, the utilization of those data for exploratory analysis to support clinical decisions is still limited. Extracting useful patterns from such data is particularly challenging because it is Title A study on damping profile for prosthetic knee Abstract An intelligent prosthetic leg for above-knee amputees has been developed by the Indian Institute of Information Technology - Allahabad. The leg is called the Adaptive Modular Active Leg (AMAL). The main aim of this paper is to generate suitable damping profiles required for locomotion by above-knee prosthetic patients. A detailed analysis of the human gait cycle is needed to provide damping profiles to the prosthetic knee. This information is obtained from the healthy leg. A simple potentiometer sensor is fitted beside the healthy knee to measure the knee angle, and strain gauges are mounted below the heel, in the shoe, to measure gait strain. These signals from the knee and the heel are the inputs that describe the gait cycle of the patient.
These two signal values are cleaned using a Kalman filter to reduce sensor noise and improve system performance. The human gait cycle is divided into six different phases to evaluate damping profiles. In this paper, we formulate six different damping equations to produce damping profiles for the prosthetic knee. An artificial neural network is used to classify the different phases of the walking cycle with suitable damping values. Title An optimal reconstruction of the human arterial tree from doppler echotracking measurements Abstract Starting from non-invasive experimental measurements by Doppler echotracking, the human arterial tree of a given patient is numerically reconstructed. The chosen approach consists in building a simplified fluid/structure interaction model for each artery and finding the parameters of the network by solving an inverse problem. The first reconstruction results of the lower arterial tree of a healthy patient are given and show a very good agreement with the echotracking measurements. Such a numerical reconstruction, which includes in particular the estimation of the stiffness of each artery, will help with the early diagnosis of cardiovascular diseases. Title A real-time architecture for detection of diseases using social networks: design, implementation and evaluation Abstract In this work we developed a surveillance architecture to detect disease-related postings in social networks, using Twitter as an example of a high-traffic social network. Our real-time architecture uses the Twitter streaming API to crawl Twitter messages as they are posted. Data mining techniques have been used to index, extract and classify postings. Finally, we evaluate the performance of the classifier with a dataset of public health postings and also evaluate the run-time performance of the whole system with respect to latency and throughput. Title Low-power fall detection in home-based environments Abstract Fall detection for the elderly is becoming more critical in an aging society. However, how to achieve reliable, highly accurate fall detection while remaining real-time and energy-efficient is an important issue. To this end, we design and implement an energy-efficient prototype called Asgard, in which a fall detection algorithm and a hybrid energy-efficient strategy are proposed. The algorithm, which can flexibly track body changes via recovery angle detection, helps to reduce false positives as well as detection time (DT). Results of comprehensive evaluations show an accuracy rate of 96.25%, which is higher than AMD (Advanced Magnitude Detection). More notably, the prototype still has low DT with the aforementioned accuracy. More precisely, with the proposed hybrid energy-efficient algorithm, Asgard functions well for approximately one month using only two AA batteries (1500 mAh each). CCS Applied computing Life and medical sciences Bioinformatics CCS Applied computing Life and medical sciences Metabolomics / metabonomics CCS Applied computing Life and medical sciences Genetics CCS Applied computing Law, social and behavioral sciences Anthropology CCS Applied computing Law, social and behavioral sciences Law Title Patent information retrieval: an instance of domain-specific search Abstract The tutorial aims to provide IR researchers with an understanding of how the patent system works, the challenges that patent searchers face in using the existing tools and in adopting new methods developed in academia.
At the same time, the tutorial will inform the IR researcher about the unique opportunities that the patent domain provides: a large amount of multi-lingual and multi-modal documents, the widest possible span of covered domains, a highly annotated corpus and, very importantly, relevance judgements created by experts in the fields and recorded electronically in the documents. The combination of these two objectives leads to the main purpose of the tutorial: to create awareness and to encourage more emphasis on the patent domain in the IR community. Table 1 provides details on how the tutorial covers the topics of the SIGIR conference. Title The Law, the Computer, and the Mind: An interview with Roy Freed Abstract 2011 marked the 50th anniversary of the first educational program on computer law, sponsored by the Joint Committee on Continuing Professional Education of the American Law Institute and the American Bar Association (ALI-ABA). In 1971 at an ACM conference, Roy Freed and six colleagues founded the Computer Law Association (CLA), an international bar association (renamed later as the International Technology Law Association). Title Application of the MINOE regulatory analysis framework: case studies Abstract In this paper, we describe a tool to help holistically understand, research and analyze the relationship between an ecosystem model and the relevant laws. Specifically, a software, MINOE, is being developed to address the needs to identify gaps, overlaps and linkages in the increasingly fragmented set of ocean-related laws. MINOE requires two pieces of information from the users, namely an ecosystem model, and a set of laws and its associated metadata, to perform the analysis. The output from MINOE is a searchable collection of laws organized by ecosystem relationships. Additionally, various visualization modules have been developed to help users synthesize the results for gap and overlap analyses. Two current usage examples are documented to illustrate the potential use of MINOE on legislation and management research. Title The systematization of law in terms of the validity Abstract In legal praxis, it is important to decide what legal relations exist in a legal problem-event on the one hand and to decide what legal rules are applicable to decide it in terms of the validation by contract through constitution or convention on the other hand. These dimensions are strongly related with each other. This paper clarifies the logical structure of a legal system to decide the above two dimension in unified reasoning in terms of the validity of legal sentences. It provides a logical model of reasoning the validity of legal sentences for a unified legal reasoning system, in which legal relations according to the time progress of legal problem-events are decided and at the same time the applicability of relevant legal rules to decide them is decided. We demonstrate the legitimacy and efficiency of this model by applying it to concrete examples and showing how legal meta-sentences and legal meta-inference work in this model. Title Catching Gray Cygnets: an initial exploration Abstract In this paper, we describe exploratory experiments for detecting potential "Gray Cygnet" cases that follow a known Black Swan. Gray Cygnets (GCs) are cases that are highly similar and subsequent to novel, surprising, provocative, exceptional cases, so-called Black Swans. They too are surprising, exceptional and provocative in the sense of continuing the change initiated by the Black Swan. 
Our experiments were carried out using a corpus of common law cases from the United States, particularly New York and Massachusetts, and the United Kingdom primarily in the era 1852-1916 during which there was dramatic change in the prevailing doctrine regarding recovery for damages by a remote buyer. It was provoked by the 1852 landmark case Title A corpus of Australian contract language: description, profiling and analysis Abstract Written contracts are a fundamental framework for economic and cooperative transactions in society. Little work has been reported on the application of natural language processing or corpus linguistics to contracts. In this paper we report the design, profiling and initial analysis of a corpus of Australian contract language. This corpus enables a quantitative and qualitative characterisation of Australian contract language as an input to the development of contract drafting tools. Profiling of the corpus is consistent with its suitability for use in language engineering applications. We provide descriptive statistics for the corpus and show that document length and document vocabulary size approximate to log normal distributions. The corpus conforms to Zipf's law and comparative type to token ratios are consistent with lower term sparsity (an expectation for legal language). We highlight distinctive term usage in Australian contract language. Results derived from the corpus indicate a longer prepositional phrase depth in sentences in contract rules extracted from the corpus, as compared to other corpora. Title A method for explaining and predicting trends: an application to the Dutch justice system Abstract A method, named Title Causal argumentation schemes to support sense-making in clinical genetics and law Abstract With some sense-making software, investigators can use causal networks to visualize possible stories explaining the evidence. Despite the different domains, there are interesting correspondences between that type of application and a proposed intelligent learning environment (ILE) in which science students could visualize and debate causal scenarios accounting for clinical findings. The proposed ILE will extend the design of the GenIE Assistant, a system to generate first-draft genetic counseling letters. This paper compares the underlying computational models of sense-making software and the GenIE Assistant. Then it discusses refinements of the Assistant's causal argumentation schemes to support debate in the ILE. The refinements are at a level of abstraction that seem applicable to computational models for sense-making and evidential reasoning in law. Title Legal shifts in the process of proof Abstract In this paper, we continue our research on a hybrid narrative-argumentative approach to evidential reasoning in the law by showing the interaction between factual reasoning and legal reasoning. We therefore emphasize the role of legal story schemes (as opposed to factual story schemes that formed the heart of our previous proposal). Legal story schemes steer what needs to be proven, but are also selected on the basis of what can be proven. They provide a coherent, holistic legal perspective on a criminal case that steers investigation and decision making. We present an extension of our previously proposed hybrid theory of reasoning with evidence, by making the connection with reasoning towards legal consequences. 
We discuss the phenomenon of legal shifts that shows that the step from evidence to (proven) facts cannot be isolated from the step from proven facts to legal consequences. We show how legal shifts can be modelled in terms of legal story schemes. Our model is illustrated by a discussion of the Dutch Wamel murder case. Title Adapting specialized legal metadata to the digital environment: the code of federal regulations parallel table of authorities and rules Abstract In the domain of print-based U.S. legal information, specialized tools that create connections between different categories of metadata increase legal research efficiency. Such tools, redesigned for the electronic sphere, could enhance digital legal information systems. This paper illustrates this kind of redesign, through a case study of one such tool---the CCS Applied computing Law, social and behavioral sciences Psychology Title Analysis and classification of conversational interactions Abstract Title How can I help you today?: the knowledge work of call center agents Abstract This paper reflects on an industry case study conducted in two outsourced call centers to explore the human side of their turnover problem. At the project's onset, management did not consider it necessary to get input from their agents as they already had a thorough knowledge of their organization's operations based on financial analyses and employee surveys. However, when we brought back examples from the field showing agent work as complex, dynamic, stressful knowledge work, management began to see the value of soliciting input from their front-line employees. What started as a turnover investigation resulted in an organizational learning initiative to capture and propagate the "human" side of call center work. In the end, we shadowed agents through their shift to create an "A day in the life of a call agent" video documentary so that everyone across the organization could appreciate the complexity of call agent work. Title BC-PDM: data mining, social network analysis and text mining system based on cloud computing Abstract A telecom BI (Business Intelligence) system consists of a set of application programs and technologies for gathering, storing, analyzing and providing access to data, which help to manage business information and support precise decision making. However, traditional analysis algorithms face new challenges given the continued exponential growth in both the volume and the complexity of telecom data. With the development of cloud computing, parallel data analysis systems have been emerging. However, existing systems rarely offer comprehensive functionality, providing either data analysis services or social network analysis. We need a comprehensive tool to store and analyze large-scale data efficiently. In response to the challenge, the SaaS (Software-as-a-Service) BI system, Title There is more than complex contagion: an indirect influence analysis on Twitter Abstract Social influence in social networks has been extensively researched. Most studies have focused on direct influence, while another interesting question can be raised as Title ICT influence on foreign wives' social integration into Singaporean society Abstract This research aims to understand the factors that lead to social exclusion among foreign wives in Singapore and the role that Information and Communication Technologies (ICTs) play in social inclusion and empowerment.
There has been a rising trend of migration through marriage in Singapore, especially between foreign brides and Singaporean men. Present literature shows that ICTs can be a source of social support to help migrants adapt to life in their host country [4]. We found that, although not a direct cause of empowerment, ICTs act as an agent to enhance social, political and economic inclusion. Through our paper we provide a starting point for discussion of recommendations. Title A cognitive analysis of the perception of shape and motion cooperation in virtual animations Abstract In order to better understand the perceptual and cognitive features of shape and motion associations, we first create synthetic animations composed of realistic motions, produced by physical modeling, mapped onto abstract shapes. Second, we present these paradoxical and surprising animations to subjects for observation and analyze their responses using qualitative analysis methods. Title Multimodal learning with audio description: an eye tracking study of children's gaze during a visual recognition task Abstract The paper explores the effects of adding audio description to an educational film on children's learning behaviour, manifested by a visual recognition task. We hypothesize that the multimodal educational setting, consisting of both verbal (film dialogue and audio description) and non-verbal (motion pictures) representations of knowledge, fosters knowledge acquisition as it provides information via multiple channels, which in turn strengthens memory retrieval. In the study we employ eye tracking methodology to examine the recognition of previously seen film material, testing whether audio description promotes recognition- rather than elimination-based decision-making in the visual recognition task. The analysis of first fixation duration and first run fixation count measures in the experimental and control groups partially confirmed our hypotheses. Children in the experimental group generally looked longer at the scenes they had seen, which supports the hypothesis that their decision was based on recognition, whereas children in the control group had longer fixations on scenes they were unfamiliar with, suggesting a decision based on elimination. Title Socio-spatial context and the habit-goal interface in audiovisual media consumption: an inter-paradigmatic approach Abstract This paper addresses the role of socio-spatial context in audiovisual media consumption by adopting a multi-paradigmatic approach that combines the Theory of Media Attendance, a socio-cognitive interpretation of Uses & Gratifications and Domestication Theory. We propose a framework that inquires (RQ 1) how goals and habits interface with each other as explanatory factors of consumption and (RQ 2) how the role of socio-spatial cues can be understood. Survey results show that different socio-spatial settings are associated with distinct explanations by goals and habits. Moreover, follow-up interviews indicate that these differences are best understood when framed in everyday life family dynamics. Title Re-thinking app design processes: applying established psychological principles to promote behaviour change - a case study from the domain of dynamic personalized travel planning Abstract In this paper, the authors outline one scenario for the application of 'next generation' personalized travel planning to the context of the 'home to school' run.
Specifically, we offer a vision for the design of a 'next generation' school walking bus facilitated through a customized mobile phone app called 'Sixth Sense Travel'. The design of the app is informed by perspectives in behavioural science - first, by the adoption of scientifically established techniques of behavioural change. Second, by applying outcomes from research on individual time 'typologies' to the usability of the interface. Rethinking the app design process to incorporate psychological principles is valuable with a view to facilitating a modal shift by increasing the proportion of children who take up active transportation to primary school. Title How can spreaders affect the indirect influence on Twitter? Abstract Most studies on social influence have focused on direct influence, while another interesting question can be raised as to whether indirect influence exists between two users who are not directly connected in the network and what affects such influence. In addition, the theory of complex contagion tells us that more spreaders will enhance the indirect influence between two users. Our observation of the intensity of indirect influence, propagated by n parallel spreaders and quantified by retweeting probability on Twitter, shows that complex contagion is validated globally but is violated locally. In other words, the retweeting probability increases non-monotonically with some local drops. CCS Applied computing Law, social and behavioral sciences Economics Title Marketing campaign evaluation in targeted display advertising Abstract In this paper, we develop an experimental analysis to estimate the causal effect of online marketing campaigns as a whole, and not just the media ad design. We analyze the causal effects on user conversion probability. We run experiments based on A/B testing to perform this evaluation. We also estimate the causal effect of the media ad design given this randomization approach. We discuss the framework of a marketing campaign in the context of targeted display advertising, and incorporate the main elements of this framework in the evaluation. We consider budget constraints, the auction process, and the targeting engine in the analysis and the experimental set-up. For the purposes of this evaluation, we assume the targeting engine to be a black box that incorporates the impression delivery policy, the budget constraints, and the bidding process. Our method to disaggregate the campaign causal analysis is inspired by randomized experiments with imperfect compliance and the intention-to-treat (ITT) analysis. In this framework, individuals assigned randomly to the study group might refuse to take the treatment. For estimation, we present a Bayesian approach and provide credible intervals for the causal estimates. We analyze the effects of two independent campaigns for different products from the Title Measuring dynamic effects of display advertising in the absence of user tracking information Abstract In this paper, we develop a time series approach, based on Dynamic Linear Models (DLM), to estimate the impact of ad impressions on the daily number of commercial actions when no user tracking is possible. The proposed method uses aggregate data, and hence it is simple to implement without expensive infrastructure. Specifically, we model the impact of the daily number of ad impressions on the daily number of commercial actions. We incorporate persistence of campaign effects on actions assuming a decay factor.
We relax the assumption of a linear impact of ads on actions using a log-transformation. We also account for outliers using long-tailed distributions, fitted and estimated automatically without a pre-defined threshold. This is applied to observational data post-campaign and does not require an experimental set-up. We apply the method to data from the Title Transaction risk management in China-US trade e-markets Abstract Trust is the central factor in market reputation, and has a huge impact on customer traffic and willingness to buy. In a free market, the desirable course for resolving disputes that would undermine trust is through risk products (insurance) and markets that allow hedging and savings to offset adverse events. At the individual level, these are typically insurance markets. This paper proposes an insurance product and business model to provide an effective solution to disputes over individual transactions in Internet markets. Title A Model for Information Growth in Collective Wisdom Processes Abstract Collaborative media such as wikis have become enormously successful venues for information creation. Articles accrue information through the asynchronous editing of users who arrive both seeking information and possibly able to contribute information. Most articles stabilize to high-quality, trusted sources of information representing the collective wisdom of all the users who edited the article. We propose a model for Title Two-sided search with experts Abstract In this paper we study distributed agent matching in environments characterized by uncertain signals, costly exploration, and the presence of an information broker. Each agent receives information about the potential value of matching with others. This information signal may, however, be noisy, and the agent incurs some cost in receiving it. If all candidate agents agree to the matching, the team is formed and each agent receives the true, previously unknown utility of the matching and leaves the market. We consider the effect of the presence of information brokers, or experts, on the outcomes of such matching processes. Experts can, upon payment of a fee, perform the service of disambiguating noisy signals and revealing the true value of a match to any agent. We analyze equilibrium behavior given the fee set by a monopolist expert and use this analysis to derive the revenue-maximizing strategy for the expert as the first mover in a Stackelberg game. Surprisingly, we find that better information can hurt: the presence of the expert, even if the use of its services is optional, can degrade both individual agents' utilities and overall social welfare. While in one-sided search the presence of the expert can only help, in two-sided (and general Title Mechanism design on discrete lines and cycles Abstract We study strategyproof (SP) mechanisms for the location of a facility on a discrete graph. We give a full characterization of SP mechanisms on lines and on sufficiently large cycles. Interestingly, the characterization deviates from the one given by Schummer and Vohra (2004) for the continuous case. In particular, it is shown that an SP mechanism on a cycle is close to dictatorial, but all agents can affect the outcome, in contrast to the continuous case. Our characterization is also used to derive a lower bound on the approximation ratio with respect to the social cost that can be achieved by an SP mechanism on certain graphs.
Finally, we show how the representation of such graphs as subsets of the binary cube reveals common properties of SP mechanisms and enables one to extend the lower bound to related domains. Title How to schedule a cascade in an arbitrary graph Abstract When individuals in a social network make decisions that depend on what others have done earlier, there is the potential for a cascade. Here we formulate the problem of ordering the nodes in a cascade to maximize the expected number of "favorable" decisions --- those that support a given option. We provide an algorithm that ensures an expected linear number of favorable decisions in any graph, and we show that the performance bounds for our algorithm are essentially the best achievable assuming P ≠ NP. Title The groupon effect on yelp ratings: a root cause analysis Abstract Daily deals sites such as Groupon offer deeply discounted goods and services to tens of millions of customers through geographically targeted daily e-mail marketing campaigns. In our prior work we observed that a negative side effect for merchants selling Groupons is that, on average, their Yelp ratings decline significantly. However, this previous work was primarily observational, rather than explanatory. In this work, we rigorously consider and evaluate various hypotheses about underlying consumer and merchant behavior in order to understand this phenomenon, which we dub the Groupon effect. We use statistical analysis and mathematical modeling, leveraging a dataset we collected spanning tens of thousands of daily deals and over 7 million Yelp reviews. We investigate hypotheses such as whether Groupon subscribers are more critical than their peers, whether Groupon users are experimenting with services and merchants outside their usual sphere, or whether some fraction of Groupon merchants provide significantly worse service to customers using Groupons. We suggest an additional novel hypothesis: reviews from Groupon users are lower on average because such reviews correspond to real, unbiased customers, while the body of reviews on Yelp contains some fraction of reviews from biased or even potentially fake sources. Although our focus is quite specific, our work provides broader insights into both consumer and merchant behavior within the daily deals marketplace. Title Bayesian optimal auctions via multi- to single-agent reduction Abstract We study an abstract optimal auction problem for selecting a subset of self-interested agents to whom to provide a service. A feasibility constraint governs which subsets can be simultaneously served; however, the mechanism may additionally choose to bundle unconstrained attributes such as payments or add-ons with the service. An agent's preference over service and attributes is given by her private type and may be multi-dimensional and non-linear. A single-agent problem is to optimize a menu to offer an agent subject to constraints on the probabilities with which each of the agent's types is served. We give computationally tractable reductions from multi-agent auction problems to these single-agent problems. Our discussion focuses on maximizing revenue, but our results can be applied to other objectives (e.g., welfare). From each agent's perspective, any multi-agent mechanism and distribution on other agent types induces an interim allocation rule, i.e., a probability that the agent will be served as a function of the type she reports.
The resulting profile of interim allocation rules (one for each agent) is feasible in the sense that there is a mechanism that induces it. An optimal mechanism can be solved for by inverting this process: (1) solve the single-agent problem of finding the optimal way to serve an agent subject to an interim allocation rule as a constraint (taking into account the agent's incentives); (2) optimize, over interim feasible allocation profiles, i.e., ones that are induced by the type distribution and some mechanism, the cumulative revenue of the single-agent problems for the interim allocation profile; and (3) find the mechanism that induces the optimal interim feasible allocation profile from the previous step. For a large class of auction problems and multi-dimensional and non-linear preferences, each of the above steps is computationally tractable. We observe that the single-agent problems for (multi-dimensional) unit-demand and budgeted preferences can be solved in polynomial time in the size of the agent's type space via a simple linear program. For feasibility constraints induced by single-item auctions, interim feasibility was characterized by Border (1991); we show that interim feasible allocation rules can be optimized over and implemented via a linear program that has quadratic complexity in the sum of the sizes of the individual agents' type spaces. We generalize Border's characterization to auctions where feasible outcomes are the independent sets of a matroid; here the characterization implies that the polytope of interim feasible allocation rules is a polymatroid. This connection implies that a concave objective, such as the revenue from the single-agent problems, can be maximized over the interim feasibility constraint in polynomial time. The resulting optimal mechanism can be viewed as a randomization over the vertices of the polymatroid which correspond to simple greedy mechanisms: given an ordering on a subset of all agent types, serve the agents in this order subject to feasibility. Related work: Cai, Daskalakis, and Weinberg (2012) solve (independently) the single-item interim feasibility problem and use this solution to give tractable revenue-optimal multi-item auctions for agents with linear utilities. Title Approximate revenue maximization with multiple items Abstract Myerson's classic result provides a full description of how a seller can maximize revenue when selling a single item. We address the question of revenue maximization in the simplest possible multi-item setting: two items and a single buyer who has independently distributed values for the items, and an additive valuation. In general, the revenue achievable from selling two independent items may be strictly higher than the sum of the revenues obtainable by selling each of them separately. In fact, the structure of optimal (i.e., revenue-maximizing) mechanisms for two items even in this simple setting is not understood. In this paper we obtain approximate revenue optimization results using two simple auctions: that of selling the items separately, and that of selling them as a single bundle. Our main results (which are of a "direct sum" variety, and apply to any distributions) are as follows. Selling the items separately guarantees at least half the revenue of the optimal auction; for identically distributed items, this becomes at least 73% of the optimal revenue.
For the case of k > 2 items, we show that selling separately guarantees at least a c/log^2(k) fraction of the optimal revenue; for identically distributed items, the bundling auction yields at least a c/log(k) fraction of the optimal revenue. CCS Applied computing Law, social and behavioral sciences Sociology Title Analysis of an online health social network Abstract With the continued advances of Web 2.0, health-centered Online Social Networks (OSNs) are emerging to provide knowledge and support for those interested in managing their own health. Despite the success of the OSNs for better connecting the users through sharing statuses, photos, blogs, and so on, it is unclear to what extent users are willing to share health-related information and whether these special-purpose OSNs can actually change users' health behaviors for the better. This paper provides an empirical analysis of a health OSN, which allows its users to record their foods and exercises, to track their diet progress towards weight-change goals, and to socialize and group with each other for community support. Based on about five months of data collected from more than 107,000 users, we studied their weigh-in behaviors and tracked their weight-change progress. We found that the users' weight changes correlated positively with the number of their friends and their friends' weight-change performance. We also show that the users' weight changes have rippling effects in the OSN due to social influence. The strength of such online influence and its propagation distance appear to be greater than those in the real-world social network. To the best of our knowledge, this is the first detailed study of a large-scale modern health OSN. Title Panel: implications of social computing in health informatics Abstract Events occur daily that challenge the health, security and sustainable growth of our society, and often find us unprepared for the catastrophic outcomes. These events involve the interaction of complex processes such as climate change, emerging infectious diseases, energy reliability, terrorism, nuclear proliferation, natural and man-made disasters, and geopolitical, social and economic vulnerabilities. If we are to prevent the adversities and leverage the opportunities that emerge from these events, integrated anticipatory reasoning has to become an everyday activity. There is increased awareness among subject-matter experts, analysts, and decision makers that a combined understanding of interacting physical and human factors is essential in addressing strategic decision making proactively. For example, multilevel studies that consider a broad range of biological, family, community, socio-cultural, environmental, policy, and macro-level economic factors provide an ideal systems science approach for public health informatics with reference to challenges such as emerging infectious diseases. Existing health modeling paradigms can be supported through the integration of socio-cultural models. This enables a more accurate characterization of how public health responses reflect factors such as social order, individual and community-based health knowledge, and behavior change due to public health communications in mass media and emerging Web technologies.
Title Crowd-sourced cartography: measuring socio-cognitive distance for urban areas based on crowd's movement Abstract Owing to rapid urbanization, urban areas are gradually becoming sophisticated spaces whose ever-evolving features we often need to know in order to make the most of the space. Therefore, keeping up with the dynamic change of urban space is necessary, although it usually requires considerable effort to understand newly visited and daily changing living spaces. In order to explore and exploit the urban complexity from crowd-sourced lifelogs, we focus on location-based social network sites. In fact, due to the proliferation of location-based social networks, we can easily acquire massive crowd-sourced lifelogs that indicate people's experiences in real space. In particular, we can conduct various novel urban analytics by monitoring the crowd's experiences in an unprecedented way. In this paper, we particularly attempt to exploit crowd-sourced location-based lifelogs for generating a socio-cognitive map, whose purpose is to deliver a much simplified and intuitive perspective of urban space. For this purpose, we measure socio-cognitive distance among urban clusters based on human mobility to represent the accessibility of urban areas based on the crowd's movement. Finally, we generate a socio-cognitive map reflecting the proposed socio-cognitive distances, which have been computed from massive geo-tagged tweets from Twitter. Title An implementation of secure two-party computation for smartphones with application to privacy-preserving interest-cast Abstract For this demo, we present an implementation of the FairPlay framework for secure two-party function computation on Android smartphones, which we call MobileFairPlay. MobileFairPlay allows high-level programming of several secure two-party protocols, including protocols for the Millionaire problem, set intersection, etc. All these functions are useful in the context of mobile social networks and opportunistic networks, where parties are often requested to exchange sensitive information (list of contacts, interest profiles, etc.) to optimise network operation. Title Opinion influence and diffusion in social network Abstract Nowadays, more and more people tend to make decisions based on opinion information from the Internet, in addition to recommendations from offline friends or parents. For example, we may browse the resumes and comments on election candidates to determine if one candidate is qualified, or consult consumer reports or reviews on e-commerce websites to decide which brand of computer is suitable for one's needs. Though opinion information is rich on the Internet, [2] points out that 58% of American Internet users deem online information irretrievable, confusing, or conflicting. Early works on opinion mining help to classify opinion polarity, to extract specific opinions and to summarize opinion texts. However, all these works are usually based on plain texts (reviews, comments or news articles). With the explosion of Web 2.0 applications, especially social network applications like blogs, discussion forums and micro-blogs, massive numbers of individual users have joined the major media websites, which leads to much more opinion material being posted on the Internet through user-shared experiences or views [3]. These opinion-rich and social network-based applications bring new perspectives for opinion mining as well.
First, in addition to plain texts (reviews, newswire) in traditional opinion mining, we see new types of cyber-based text, such as personal diary blogs and cyber-SMS tweets. Second, if we regard the opinions in plain text as static, the dynamic change of opinions in the social network is a promising new area that is attracting increasing attention from researchers worldwide. In a social network, the opinion held by one individual is not static but changes, and can be influenced by others. A series of changes among different users forms the opinion propagation or diffusion in the network. This paper and my doctoral work focus on opinion influence and diffusion in the social network, exploring the detailed process of one-to-one influence and the opinion diffusion process in the social network. The significance of this work is that it can benefit many related research problems, such as influence maximization and viral marketing. Some pioneering works have been conducted to investigate the role of social networks in information diffusion and the role of influencers in the social network. These works are usually based on information diffusion models, like the cascade model (CM) or the epidemic model (EM). However, we argue that it is not enough to simply apply these models to opinion influence and diffusion. 1) For both CM and EM, the status shift is along specific directions, from inactive to active (CM) or from susceptible to infectious, and then to recovered (EM). But opinion influence is more complex. Title Opinion interaction network: opinion dynamics in social networks with heterogeneous relationships Abstract Recent empirical studies have discovered that many social networks have heterogeneous relationships, which are signed and weighted relationships between individual nodes. To explore the pattern of opinion dynamics in diverse social networks with heterogeneous relationships, we set up a general agent-based simulation framework named opinion interaction network (OIN), and propose a novel model of opinion dynamics, in which the influence of agents depends on their heterogeneous relationships. Then, by conducting a series of simulations based on OIN, we find that the opinions at steady state depend on the degree of social harmoniousness and average connectivity, and a similar pattern can be observed in Erdős–Rényi, small-world and scale-free networks, which illustrates that topological properties such as short path length, high clustering, and heterogeneous degrees have little effect on opinion dynamics with heterogeneous relationships. Title Characterizing large-scale population's indoor spatio-temporal interactive behaviors Abstract Human activity behaviors in urban areas mostly occur in interior places, such as department stores, office buildings, and museums. Understanding and characterizing human spatio-temporal interactive behaviors in these indoor areas can help us evaluate the efficiency of social contacts, monitor the transmission of frequently asymptomatic diseases, and design better internal structures of buildings. In this paper, we propose a new temporal quantity: Title Second screen applications and tablet users: constellation, awareness, experience, and interest Abstract This study investigates how tablet users incorporate multiple media in their television viewing experience. Three patterns are found: (a) only focusing on television, (b) confounding television viewing with other screen media (e.g.
laptop, tablet) and (c) confounding television viewing with various media, including print and screen media. Furthermore, we question how the incorporation of screen media in this experience affects the practice of engaging in digital commentary on television content. Also, we inquire the uptake and interest in so-called 'second screen applications'. These applications allow extensions of the primary screen experience on secondary screens (e.g. tablet). The results, based on a sample of 260 tablet users, indicate that there is only a modest uptake and interest in using secondary screens to digitally share opinions. However, the use of second screen interaction with television content is not discarded: although there is still little awareness and experience, we notice a moderate interest in these apps. Title The composition and role of convergent technology repertoires in audiovisual media consumption Abstract This paper addresses the role of socio-spatial context on audiovisual media consumption by adopting a multi-paradigmatic approach that combines the Theory of Media Attendance, a socio-cognitive interpretation of Uses & Gratifications and Domestication Theory. We propose a framework that inquires (RQ 1) how goals and habits interface with each other as explanatory factors of consumption and (RQ 2) how the role of socio-spatial cues can be understood. Survey results show that different socio-spatial settings are associated with distinct explanations by goals and habits. Moreover, follow-up interviews indicate that these differences are best understood when framed in everyday life family dynamics. Title Everyday life in (front of) the screen: the consumption of multiple screen technologies in the living room context Abstract Today's (home) media environment is becoming increasingly saturated. Smartphones, tablets and laptops enter our living room and possibly alter our television viewing experience. In this paper, we want to grasp how 'screen technologies' are interrelated on a textual (content) and a material level (technological object), from a user perspective. By means of domestic in-depth interviews with owners of multiple screen technologies, we interpret the integration of multiple secondary screens in the everyday television viewing behavior. In most cases, the use of second screens is not related to television content. Nonetheless, we also found evidence of changing dynamics concerning public and private spaces, as people extend television text on their second screens into online social spaces or more generally, the Internet. These interactive structures provide individuals with opportunities as well as threats. In conclusion, we identify directions for future research on the consumption and reception of television. 
CCS Applied computing Computer forensics Surveillance mechanisms CCS Applied computing Computer forensics Investigation techniques CCS Applied computing Computer forensics Evidence collection, storage and analysis CCS Applied computing Computer forensics Network forensics CCS Applied computing Computer forensics System forensics CCS Applied computing Computer forensics Data recovery CCS Applied computing Arts and humanities Fine arts Title Acqua vellutata sospesa: interactive video painting Abstract In this paper I present the interactive video painting artwork " Title Sonify your face: facial expressions for sound generation Abstract We present a novel visual creativity tool that automatically recognizes facial expressions and tracks facial muscle movements in real time to produce sounds. The facial expression recognition module detects and tracks a face and outputs a feature vector of motions of specific locations in the face. The feature vector is used as input to a Bayesian network which classifies facial expressions into several categories (e.g., angry, disgusted, happy, etc.). The classification results are used along with the feature vector to generate a combination of sounds that change in real time depending on the person's facial expressions. We explain the artistic motivation behind the work, the basic components of our tool, and possible applications in the arts (performance, installation) and in the medical domain. Finally, we report on the experience of approximately 25 users of our system at a conference demonstration session, of 9 participants in a pilot study to assess the system's usability, and discuss our experience installing the work at an important digital arts festival (RE-NEW 2009). Title Ozone: continuous state-based media choreography system for live performance Abstract This paper describes Ozone, a new Title Building with a memory: responsive color interventions Abstract Building with a Memory is a subtle responsive intervention that aims to provide cohesion and community awareness through the use of light and color. The installation delivers thought-provoking information by capturing, analyzing and rendering real-time and archived human activity in a workplace setting. The installation senses movement in the space through an IR camera and computer vision techniques. Two custom lighting fixtures and a video monitor render the aggregated movements. The visually simple aesthetic of the piece aims to balance active engagement and passive contribution, providing a rewarding experience for both occasional passersby and regular users of the space. This paper describes the motivations and contributions of the installation, together with insights gained from an informal evaluation and directions for future explorations. Title Encounter (resonances) Abstract This work is about the remediation of one of Mark Rothko's Seagram murals through the composition of several online sources and additional digital rendering. Based on reproductions of Rothko's "Red on Maroon" found on the Internet, and using computer graphics compositing associated with moiré and specular lighting effects, "Encounter (Resonances)" offers a new approach to the presentation of a piece of work that allows a viewer to perceive some of its very subtle nuances. The work echoes Rothko's mixed media layered painting technique by using reproductions of various color palettes and resolutions as metaphors for the layers of paint in his original works. 
While each of these copies may instantly remind us of the original work, the graphical rendering of "Encounter (Resonances)" combines them at three levels of representation (global shape, micro and macro structure), in an effort to encourage a level of prolonged engagement and gradual discovery in the artwork. Title HUM, an interactive and collaborative art installation Abstract This paper describes Title An internet of cars: connecting the flow of things to people, artefacts, environments and businesses Abstract In this paper, the authors introduce a creative approach to conceiving cars as data packets through the use of their license registration plate and offering a playful platform that allows users to engage with them as though they were part of social media. The paper introduces the concept of the Internet of Things and suggests that a barrier exists that is preventing the general public from conceiving cars as being part of a similar network. The authors identify similarities between existing tagging technologies that support objects to be tracked through the internet but highlight the apparent oversight of cars to offer the same capabilities. The authors present a vision for a platform that leverages the unique identifying properties of car registration plates and introduces a cultural project in which people will be able to 'play' with cars as they might data through games, messaging services, and visualisations. Title Acting lesson with robot: emotional gestures Abstract In this video, real-life acting professor Matthew Gray tutors Data the Robot (a Nao model) to improve his expression of emotion via Chekhov's Psychological Gestures. Though the video narrative is fictional and the robot actions pre-programmed, the aim of the dramatization is to introduce an acting methodology that social robots could use to leverage full body affect expressions. The video begins with Gray leading Nao in traditional human actor warm-up exercises. Next, Gray shows Data a video of his students practicing Chekhov psychological gestures [4] [11]. Finally, Data tries out some 'push' gestures himself. By pairing the 'push' gesture with text, the viewer is intended to unconsciously associate the words with an outpouring of emotion. Finally, Data's programmer, Knight, arrives to pick up the robot from his lesson, "until next time". This video playfully introduces full-body emotional gestures. The benefit of such movement-based full-body expressions is that they do not necessarily require a robot to have human-like facial expressions nor humanoid form to be effective (though the interplay of psychological gesture with multi-modal expressions could provide fertile terrain for future research). Instead, these full-body motions are translations of an actor's motive/intent that suffuse the whole form (e.g. expansion, sluggishness, lightness). We note that there are various schools of physical theater dedicated to understanding movement [5]. Related investigations in the robotics world that have applied acting method or practice to social robot design or architecture also include [2][3][6][7][8][9][10]. As Blaire writes about in her text on acting and neuroscience [1], the discovery of mirror neurons in our brain have led some dramaturges to theorize that audience members simulate the gestures of the performers through their own neural circuitry for interpretation. If so, full body gestures may be able to tap into our emotional experience in a uniquely human way. 
We hope this will be the first of several spirited demonstration videos that explore intersections wherein human acting methodologies might benefit the development of robot non-verbal expressions. Title Abstract rendering of human activity in a dynamic distributed learning environment Abstract Contemporary distributed enterprises present challenges in terms of demonstrating community activity awareness and coherence across individuals and teams in collaborating networks. Building with a Memory is an experiential media system that captures and represents human activity in a distributed workplace over time. The system senses and analyzes movement in two workspaces in a mixed-use building with the results rendered in an informative ambient display in the building entryway. We describe the design and development of the system, together with insights from two studies of the installation and promising future directions. Title ACM multimedia interactive art program: interaction stations Abstract The Interaction Stations Exhibit features screen-based, interactive works that align with the new conference themes, and integrate into the physical setting of the conference center. In this paper we describe our motivation for this format, as well as the works selected and larger connections. CCS Applied computing Arts and humanities Performing arts Title Acqua vellutata sospesa: interactive video painting Abstract In this paper I present the interactive video painting artwork " Title Sonify your face: facial expressions for sound generation Abstract We present a novel visual creativity tool that automatically recognizes facial expressions and tracks facial muscle movements in real time to produce sounds. The facial expression recognition module detects and tracks a face and outputs a feature vector of motions of specific locations in the face. The feature vector is used as input to a Bayesian network which classifies facial expressions into several categories (e.g., angry, disgusted, happy, etc.). The classification results are used along with the feature vector to generate a combination of sounds that change in real time depending on the person's facial expressions. We explain the artistic motivation behind the work, the basic components of our tool, and possible applications in the arts (performance, installation) and in the medical domain. Finally, we report on the experience of approximately 25 users of our system at a conference demonstration session, of 9 participants in a pilot study to assess the system's usability, and discuss our experience installing the work at an important digital arts festival (RE-NEW 2009). Title Ozone: continuous state-based media choreography system for live performance Abstract This paper describes Ozone, a new Title Building with a memory: responsive color interventions Abstract Building with a Memory is a subtle responsive intervention that aims to provide cohesion and community awareness through the use of light and color. The installation delivers thought-provoking information by capturing, analyzing and rendering real-time and archived human activity in a workplace setting. The installation senses movement in the space through an IR camera and computer vision techniques. Two custom lighting fixtures and a video monitor render the aggregated movements. The visually simple aesthetic of the piece aims to balance active engagement and passive contribution, providing a rewarding experience for both occasional passersby and regular users of the space. 
This paper describes the motivations and contributions of the installation, together with insights gained from an informal evaluation and directions for future explorations. Title The rumentarium project Abstract The paper describes the design, production and usage of the "Rumentarium", a computer-based sound generating system involving physical objects as sound sources. The Rumentarium is a set of handmade resonators, acoustically excited by DC motors, interfaced to a computer by four microcontrollers. Following an ecological/anthropological perspective, in the Rumentarium discarded materials are used as sound sources. While entirely computationally-controlled, the Rumentarium is an acoustic sound generator. The paper provides a general description of the Rumentarium and discusses some artistic applications. Title HUM, an interactive and collaborative art installation Abstract This paper describes Title Encounter (resonances) Abstract This work is about the remediation of one of Mark Rothko's Seagram murals through the composition of several online sources and additional digital rendering. Based on reproductions of Rothko's "Red on Maroon" found on the Internet, and using computer graphics compositing associated with moiré and specular lighting effects, "Encounter (Resonances)" offers a new approach to the presentation of a piece of work that allows a viewer to perceive some of its very subtle nuances. The work echoes Rothko's mixed media layered painting technique by using reproductions of various color palettes and resolutions as metaphors for the layers of paint in his original works. While each of these copies may instantly remind us of the original work, the graphical rendering of "Encounter (Resonances)" combines them at three levels of representation (global shape, micro and macro structure), in an effort to encourage a level of prolonged engagement and gradual discovery in the artwork. Title An internet of cars: connecting the flow of things to people, artefacts, environments and businesses Abstract In this paper, the authors introduce a creative approach to conceiving cars as data packets through the use of their license registration plate and offering a playful platform that allows users to engage with them as though they were part of social media. The paper introduces the concept of the Internet of Things and suggests that a barrier exists that is preventing the general public from conceiving cars as being part of a similar network. The authors identify similarities between existing tagging technologies that support objects to be tracked through the internet but highlight the apparent oversight of cars to offer the same capabilities. The authors present a vision for a platform that leverages the unique identifying properties of car registration plates and introduces a cultural project in which people will be able to 'play' with cars as they might data through games, messaging services, and visualisations. Title Acting lesson with robot: emotional gestures Abstract In this video, real-life acting professor Matthew Gray tutors Data the Robot (a Nao model) to improve his expression of emotion via Chekhov's Psychological Gestures. Though the video narrative is fictional and the robot actions pre-programmed, the aim of the dramatization is to introduce an acting methodology that social robots could use to leverage full body affect expressions. The video begins with Gray leading Nao in traditional human actor warm-up exercises. 
Next, Gray shows Data a video of his students practicing Chekhov psychological gestures [4] [11]. Finally, Data tries out some 'push' gestures himself. By pairing the 'push' gesture with text, the viewer is intended to unconsciously associate the words with an outpouring of emotion. Finally, Data's programmer, Knight, arrives to pick up the robot from his lesson, "until next time". This video playfully introduces full-body emotional gestures. The benefit of such movement-based full-body expressions is that they do not necessarily require a robot to have human-like facial expressions nor humanoid form to be effective (though the interplay of psychological gesture with multi-modal expressions could provide fertile terrain for future research). Instead, these full-body motions are translations of an actor's motive/intent that suffuse the whole form (e.g. expansion, sluggishness, lightness). We note that there are various schools of physical theater dedicated to understanding movement [5]. Related investigations in the robotics world that have applied acting method or practice to social robot design or architecture also include [2][3][6][7][8][9][10]. As Blaire writes about in her text on acting and neuroscience [1], the discovery of mirror neurons in our brain have led some dramaturges to theorize that audience members simulate the gestures of the performers through their own neural circuitry for interpretation. If so, full body gestures may be able to tap into our emotional experience in a uniquely human way. We hope this will be the first of several spirited demonstration videos that explore intersections wherein human acting methodologies might benefit the development of robot non-verbal expressions. Title Abstract rendering of human activity in a dynamic distributed learning environment Abstract Contemporary distributed enterprises present challenges in terms of demonstrating community activity awareness and coherence across individuals and teams in collaborating networks. Building with a Memory is an experiential media system that captures and represents human activity in a distributed workplace over time. The system senses and analyzes movement in two workspaces in a mixed-use building with the results rendered in an informative ambient display in the building entryway. We describe the design and development of the system, together with insights from two studies of the installation and promising future directions. CCS Applied computing Arts and humanities Architecture (buildings) CCS Applied computing Arts and humanities Language translation Title A framework for retrieval and annotation in digital humanities using XQuery full text and update in BaseX Abstract A key difference between traditional humanities research and the emerging field of digital humanities is that the latter aims to complement qualitative methods with quantitative data. In linguistics, this means the use of large corpora of text, which are usually annotated automatically using natural language processing tools. However, these tools do not exist for historical texts, so scholars have to work with unannotated data. We have developed a system for systematic, iterative exploration and annotation of historical text corpora, which relies on an XML database (BaseX) and in particular on the Full Text and Update facilities of XQuery. 
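The abstract above describes an iterative explore-and-annotate workflow over an XML corpus, backed in the actual system by BaseX and the XQuery Full Text and Update facilities. A minimal stand-alone Python sketch of that loop is given below; it is an illustration rather than the authors' code: the element name w, the lemma attribute, the toy lexicon and the explore/annotate helpers are assumptions, and Python's standard xml.etree stands in for the XML database.

```python
# Illustrative sketch (not the authors' implementation): an iterative
# explore/annotate pass over a tiny XML corpus, standing in for the
# XQuery Full Text + Update workflow described in the abstract.
# Element and attribute names are assumed.
import xml.etree.ElementTree as ET

SAMPLE = """<corpus>
  <doc id="d1">
    <w>vnnd</w><w lemma="stadt">stat</w><w>der</w><w>kunig</w>
  </doc>
</corpus>"""

# A toy lexicon built up iteratively by the scholar; in the real system it
# grows as full-text queries surface new unannotated forms.
lexicon = {"vnnd": "und", "kunig": "koenig"}

root = ET.fromstring(SAMPLE)

def explore(root):
    """Exploration step: return word elements that still lack a lemma."""
    return [w for w in root.iter("w") if "lemma" not in w.attrib]

def annotate(words, lexicon):
    """Update step: attach a lemma where the lexicon already knows the form."""
    for w in words:
        lemma = lexicon.get(w.text.lower())
        if lemma is not None:
            w.set("lemma", lemma)

pending = explore(root)
print("unannotated before:", [w.text for w in pending])
annotate(pending, lexicon)
print("unannotated after: ", [w.text for w in explore(root)])
```

Running the sketch shows the list of unannotated tokens shrinking after one annotation pass, which is the essence of the iterative workflow the abstract describes.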
Title A phonetic approach to handling spelling variations in medieval documents Abstract Spelling variations pose a critical obstacle in the study, comprehension, and translation of medieval manuscripts. In this short paper we describe a new process and tool, POM (Phonetic Orthography Mapper), which we developed to map spelling variations to the standard German orthography used today. The tool is based on phonetic analysis and machine learning techniques. POM was applied to more than 20,000 digitized German medieval manuscripts and we were able to correctly map more than 60,000 spelling variations. The research described in this short paper is part of a larger interdisciplinary project in the "digital humanities", particularly concerning software-tool support for the handling of historic central-European documents written in medieval German dialects. Title The Readers Project: procedural agents and literary vectors Abstract The Readers Project is an aesthetically oriented system of software entities designed to explore the culture of human reading. These entities, or "readers," navigate texts according to specific reading strategies based upon linguistic feature analysis and real-time probability models harvested from search engines. As such, they function as autonomous text generators, writing machines that become visible within and beyond the typographic dimension of the texts on which they operate. Thus far the authors have deployed the system in a number of interactive art installations at which audience members can view the aggregate behavior of the readers on a large screen display and also subscribe, via mobile device, to individual reader outputs. As the structures on which these readers operate are culturally and aesthetically implicated, they shed critical light on a range of institutional practices -- particularly those of reading and writing -- and explore what it means to engage with the literary in digital media. Title Incremental compilation of knowledge documents for markup-based closed-world authoring Abstract Text-based authoring using knowledge markups is an increasingly popular editing paradigm in manual knowledge acquisition. Closed-world authoring environments support the user in forming a coherent knowledge base by checking the referenced objects against a set of declared domain objects. In this scenario, the task of efficient translation (compilation) of the text sources is non-trivial. Additionally, in real-world applications frequent small changes are performed on the source documents and instant feedback to the author is crucial. Therefore, a scalable compilation into the target knowledge representations is necessary. In this paper, we introduce a general algorithm for the incremental compilation of knowledge documents that analyzes the current document modifications and performs minimal updates on the knowledge base. We provide a formal proof of the correctness of the algorithm and show the effectiveness of the approach in several case studies, using various kinds of knowledge representations and markups. Title Phonetic database for automated generating of logopedic exercises Abstract The paper presents a method, a dynamic database (DB) and a Web-based system for generating and managing logopedic phonetic exercises. When creating a record in the DB, some fields are automatically filled in using linguistic and semantic technologies.
Phonetic exercises of a given type are generated by means of a common query to the DB, indicating appropriate attributes in the terms used in practice to compose similar exercises. Title Large multimedia archive for world languages Abstract In this paper, we describe the core pillars of a large archive of language material recorded worldwide, partly about languages that are highly endangered. The bases for the documentation of these languages are audio/video recordings, which are then annotated at several linguistic layers. The digital age completely changed the requirements of long-term preservation, and we discuss how the archive met these new challenges. An extensive solution for data replication has been worked out to guarantee bit-stream preservation. Due to immediate conversion of incoming data to standards-based formats and checks at upload time, lifecycle management of all 50 terabytes of data is greatly simplified. A suitable metadata framework, which allows users not only to describe and discover resources but also to organize them, enables this amount of material to be managed very efficiently. Finally, it is the Language Archiving Technology software suite which allows users to create, manipulate, access and enrich all archived resources, provided that they have access permissions. Title Usability and use of SLS: caption Abstract SLS:Caption provides captioning functionality for deaf and hearing users to provide captions for video content (including sign language content). Users are able to enter and modify text as well as adjust its font, colour, location and background opacity. An initial user study with hearing users showed that SLS:Caption was easy to learn and use. However, users seem reluctant to produce captions for their own video material; this was likely due to the task complexity and time required to create captions, regardless of the usability of the captioning tool. Title Translating politeness across cultures: case of Hindi and English Abstract In this paper, we present a corpus-based study of politeness across two languages, English and Hindi. It studies politeness in a translated parallel corpus of Hindi and English and examines how politeness in a Hindi text is translated into English. We provide a detailed theoretical background in which the comparison is carried out, followed by a brief description of the translated data within this theoretical model. Since politeness may become one of the major sources of conflict and misunderstanding, it is a very important phenomenon to be studied and understood cross-culturally, particularly for such purposes as machine translation. Title An information retrieval approach to spelling suggestion Abstract In this paper, we present a two-step language-independent spelling suggestion system. In the first step, candidate suggestions are generated using an Information Retrieval (IR) approach. In step two, candidate suggestions are re-ranked using a new string similarity measure that uses the length of the longest common substrings occurring at the beginning and end of the words. We obtained very impressive results by re-ranking candidate suggestions using the new similarity measure. The accuracy of the first suggestion is 92.3%, 90.0% and 83.5% for the Dutch, Danish and Bulgarian datasets, respectively.
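The two-step spelling suggestion pipeline above lends itself to a short illustration. The sketch below covers only the second step, re-ranking candidates by a similarity built from the longest common substrings at the beginning and end of the words; it is not the paper's implementation, and the way the prefix and suffix overlaps are combined and normalised, as well as the toy candidate list, are assumptions.

```python
# Illustrative sketch (not the paper's code): re-rank candidate suggestions by
# the longest common substrings at the beginning and end of the words.
# The normalisation used below is an assumption.
def common_prefix_len(a, b):
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def common_suffix_len(a, b):
    return common_prefix_len(a[::-1], b[::-1])

def similarity(misspelling, candidate):
    overlap = common_prefix_len(misspelling, candidate) + common_suffix_len(misspelling, candidate)
    return overlap / max(len(misspelling), len(candidate))

def rerank(misspelling, candidates):
    """Step two of the pipeline: order IR-generated candidates by similarity."""
    return sorted(candidates, key=lambda c: similarity(misspelling, c), reverse=True)

# Toy usage: candidates assumed to come from the step-one IR lookup.
print(rerank("adress", ["address", "redress", "adverse"]))
```

On this toy input the shared prefix and suffix pull "address" to the top of the list, which is the behaviour the re-ranking step relies on.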
Title Mobile SMS to Braille transcription: a new era of mobile for the blinds Abstract In this paper, we describe an application that converts Mobile SMSs into Braille script with special emphasis on special symbols so that visually impaired people will also be able to read mobile messages. This application filters mobile message's text from their special format and than converts this message text into Braille script and send it to the parallel port so that it can be embossed on paper by Braille Embosser or can be read by electronic Braille reader attached to the parallel port. This application increases the availability of information and use of technology for handicapped--visually impaired individuals. CCS Applied computing Arts and humanities Media arts CCS Applied computing Arts and humanities Sound and music computing Title Coming together: composition by negotiation Abstract In this paper, we describe a software system that generates unique musical compositions in realtime, created by four autonomous multi-agents. Given no explicit musical data, agents explore their environment, building beliefs through interactions with other agents via messaging and listening (to both audio and/or MIDI data), generating goals, and executing plans. The artistic focus of Title Coming together: negotiated content by multi-agents Abstract In this paper, we describe a software system that generates unique musical compositions in realtime, created by four autonomous multi-agents. Given no explicit musical data, agents explore their environment, building beliefs through interactions with other agents via messaging and listening (to both audio and/or MIDI data), generating goals, and executing plans. The artistic focus of Coming Together is the actual process of convergence, heard during performance (each of which usually lasts about ten minutes): the movement from random individualism to converged ensemble interaction. If convergence is successful, four additional agents are instantiated that exploit the emergent harmony and rhythm through brief, but beautiful melodic gestures. Once these agents have completed their work, or if the original "explorer" agents fail to converge, the system resets itself, and the process begins again. Title Visualization of concurrent tones in music with colours Abstract Visualizing music in a meaningful and intuitive way is a challenge. Our aim is to visualize music by interconnecting similar aspects in music and in visual perception. We focus on visualizing harmonic relationships between tones and colours. Related existing visualizations map tones or keys into a discrete set of colours. As concurrent (simultaneous) tones are not perceived as entirely separate, but also as a whole, we present a novel method for visualizing a group of concurrent tones (limited to the pitches of the 12-tone chromatic scale) with one colour for the whole group. The basis for calculation of colour is the assignment of key spanning circle of thirds to the colour wheel. The resulting colour is not limited to discrete set of colours: similar tones, chords and keys have similar colour hue; dissonance and consonance are represented by low and high colour saturation respectively. The proposed method is demonstrated as part of our prototype music visualization system using extended 3-dimensional piano roll notation. 
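The colour-mapping idea in the last abstract above can be made concrete with a small sketch. The Python below places the 12 pitch classes on a circle, takes the mean unit vector of the sounding pitch classes, and reads hue from its angle and saturation from its length; the circle-of-fifths placement is an assumed stand-in for the paper's key-spanning circle of thirds, and the function names are illustrative, not the authors' mapping.

```python
# Illustrative sketch: hue from the mean direction of the sounding pitch
# classes on a circle, saturation from how tightly they cluster. Consonant
# groups cluster more and therefore come out more saturated.
import colorsys
import math

def pitch_class_angle(pc: int, step: int = 7) -> float:
    """Angle for a pitch class; step=7 walks the circle of fifths, used here
    as an assumed stand-in for the key-spanning circle of thirds."""
    position = (pc * step) % 12
    return 2 * math.pi * position / 12

def tones_to_rgb(pitch_classes):
    """Map a set of concurrent pitch classes (0..11) to an RGB triple."""
    xs = [math.cos(pitch_class_angle(pc)) for pc in pitch_classes]
    ys = [math.sin(pitch_class_angle(pc)) for pc in pitch_classes]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    hue = (math.atan2(my, mx) % (2 * math.pi)) / (2 * math.pi)  # position on the colour wheel
    saturation = min(1.0, math.hypot(mx, my))                   # cluster tightness
    return colorsys.hsv_to_rgb(hue, saturation, 1.0)

print(tones_to_rgb([0, 4, 7]))  # C major triad: relatively saturated colour
print(tones_to_rgb([0, 1, 6]))  # dissonant cluster: washed-out colour
```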
Title MML 2010: international workshop on machine learning and music Abstract MML 2010, the International Workshop on Machine Learning and Music, continues a series of workshops related to artificial intelligence and machine learning in music. In this short article the Programme Chairs summarize the content of the workshop. Title Modeling concept dynamics for large scale music search Abstract Continuing advances in data storage and communication technologies have led to an explosive growth in digital music collections. To cope with their increasing scale, we need effective Music Information Retrieval (MIR) capabilities like tagging, concept search and clustering. Integral to MIR is a framework for modelling music documents and generating discriminative signatures for them. In this paper, we introduce a multimodal, layered learning framework called DMCM. Distinguished from the existing approaches that encode music as an ensemble of order-less feature vectors, our framework extracts from each music document a variety of acoustic features, and translates them into low-level encodings over the temporal dimension. From them, DMCM elucidates the concept dynamics in the music document, representing them with a novel music signature scheme called Stochastic Music Concept Histogram (SMCH) that captures the probability distribution over all the concepts. Experiment results with two large music collections confirm the advantages of the proposed framework over existing methods on various MIR tasks. Title What we talk about when we talk about co-creative tangibles Abstract We investigate ways in which emergent digital technologies embedded into "soft things" offer new possibilities for communication, collaboration and co-creation between people with and without disabilities. Specifically, we look at ways in which tangible, net based, multi-modal artifacts can be used for creating and sharing music and visuals. In this paper, we describe the notion of co-creative tangibles, and the ways in which different stakeholders talk about these "smart things". The three participating groups are children and their families, care givers and teachers, and researchers from different disciplines. We will specifically investigate Universal Design in the area of Tangible Interaction, and ways of talking about the "things" we develop, use and interact with. Title Parametric time-frequency representation of spatial sound in virtual worlds Abstract Directional audio coding (DirAC) is a parametric time-frequency domain method for processing spatial audio based on psychophysical assumptions and on energetic analysis of the sound field. Methods to use DirAC in spatial sound synthesis for virtual worlds are presented in this article. Formal listening tests are used to show that DirAC can be used to position and to control the spatial extent of virtual sound sources with good audio quality. It is also shown that DirAC can be used to generate reverberation for N-channel horizontal listening with only two monophonic reverberators without a prominent loss in quality when compared with quality obtained with N-channel reverberators. 
Title Creative access to technology: building sounding artifacts with children Abstract The research presented in this paper is driven by the idea that children can get unique access to technology through the construction of what one might want to call Title Designing soundscapes of virtual environments for crisis management training Abstract Since crisis management training requires extensive resources, we are offering a virtual environment which is meant to complement live exercise training. This paper presents a prototype of communication in noisy conditions that forms a basis for such training in a virtual environment. As a part of this development, we have designed and implemented communication metaphors, which are derived from empirical data and the latest research in voice communication. Furthermore, this research proposes a taxonomy of sounds that is based on the relation between noise and the communication metaphors as an extension to existing soundscape taxonomies. The proposed communication metaphors and soundscape taxonomy are implemented in a prototype using an integration of a game engine and voice communications. Title Supervised dictionary learning for music genre classification Abstract This paper concerns the development of a music codebook for summarizing local feature descriptors computed over time. Compared to a holistic representation, this text-like representation better captures the rich and time-varying information of music. We systematically compare a number of existing codebook generation techniques and also propose a new one that incorporates labeled data in the dictionary learning process. Several aspects of the encoding system, such as local feature extraction and codeword encoding, are also analyzed. Our result demonstrates the superiority of sparsity-enforced dictionary learning over conventional VQ-based or exemplar-based methods. With the new supervised dictionary learning algorithm and the optimal settings inferred from the performance study, we achieve state-of-the-art accuracy in music genre classification using just the log-power spectrogram as the local feature descriptor. The classification accuracies for the benchmark datasets GTZAN and ISMIR2004Genre are 84.7% and 90.8%, respectively. CCS Applied computing Computers in other domains Digital libraries and archives Title Uffizi touch®: a new experience with art Abstract Centrica (www.centrica.it) has developed Uffizi Touch®, Title Metadata visualization of scholarly search results: supporting exploration and discovery Abstract Studies of online search behaviour have found that searchers often face difficulties formulating queries and exploring the search result sets. These shortcomings may be especially problematic in digital libraries since library searchers employ a wide variety of information-seeking methods (with varying degrees of support), and the corpus to be searched is often more complex than simple textual information. This paper presents Bow Tie Academic Search, an interactive Web-based academic library search interface aimed at supporting the strategic retrieval behaviour of searchers. In this system, a histogram of the most frequently used keywords in the top search results is provided, along with a compact visual encoding that represents document similarities based on the co-use of keywords. In addition, the list-based representation of the search results is enhanced with visual representations of citation information for each search result. 
A detailed view of this citation information is provided when a particular search result is selected. These tools are designed to provide visual and interactive support for query refinement, search results exploration, and citation navigation, making extensive use of the metadata provided by the underlying academic information retrieval system. Title Addressing the long tail in empirical research data management Abstract At present, efforts are being made to treat research data as bibliographic artifacts for re-use, transparency and citation. When approaching research data management solutions, it is imperative to consider carefully how filed data can be retrieved and accessed again on the user side. In the field of economics, a large amount of research is based on empirical data, which is often combined from several sources such as data centers, affiliated institutes or self-conducted surveys. Respecting this practice, we motivate and elaborate on techniques for fine-grained referencing of data fragments, so as to avoid multiple copies of the same data being archived over and over again, which may result in questionable transparency and difficult curation tasks. In addition, machines should have a deeper understanding of the given data, so that high-quality services can be provided. The paper first discusses the challenges of managing research data as used in empirical research. We then compare referencing and copying strategies and reflect on their respective implications. Building on this argumentation, we elaborate on a data representation model, which we further examine with regard to further extensions. A Generating Model is subsequently introduced to enable citation, transparency and re-use. Finally, we close with the demonstration of an exploratory prototype for data access and investigate a distance metric for assisting in finding similar data sets and evaluating existing compositions. Title Document and archive: editing the past Abstract Document engineering has a difficult task: to propose tools and methods to manipulate contents and make sense of them. This task is even harder when dealing with archives, insofar as document engineering has not only to provide tools for expressing sense but above all tools and methods to keep contents accessible in their integrity and intelligible according to their meaning. However, these objectives may be contradictory: access implies transforming contents to make them accessible through networks, tools and devices. Intelligibility may imply adapting contents to the current state of knowledge and capacity for understanding. But, by doing that, can we still speak of authenticity, integrity, or even the identity of documents? Document engineering has provided powerful means to express meaning and to turn an intention into a semiotic expression. Document repurposing has become a common way of exploiting libraries, archives, etc. By enabling the reuse of a specific part of a given content, repurposing techniques make it possible to entirely renegotiate the meaning of this part by changing its context and its interactivity, in short the way people can consider this piece of content and interpret it. Put in this way, there could be an antinomy between archiving and document engineering. However, transforming documents and editing content is an efficient way to keep them alive and compelling for people. 
Preserving contents does not consist in simply storing them but in actively transforming them to adapt them technically and keep them intelligible. Editing the past is then a new challenge, merging a content deontology with a document technology. This challenge implies redefining classical notions such as authenticity and highlights the need for new concepts and methods. Especially in a digital world, documents are permanently reconfigured by technical tools that produce variants and similar contents, calling into question the usual definition of the identity of documents. Editing the past calls for a new critique of variants. Title Structural and visual comparisons for web page archiving Abstract In this paper, we propose a Web page archiving system that combines state-of-the-art comparison methods based on the source code of Web pages with computer vision techniques. To detect whether successive versions of a Web page are similar or not, our system is based on: (1) a combination of structural and visual comparison methods embedded in a statistical discriminative model, (2) a visual similarity measure designed for Web pages that improves change detection, (3) a supervised feature selection method adapted to Web archiving. We train a Support Vector Machine model with vectors of similarity scores between successive versions of pages. The trained model then determines whether two versions, defined by their vector of similarity scores, are similar or not. Experiments on real archives validate our approach. Title DocExplore: overcoming cultural and physical barriers to access ancient documents Abstract In this paper, we describe DocExplore, an integrated software suite centered on the handling of digitized documents with an emphasis on ancient manuscripts. This software suite allows the augmentation and exploration of ancient documents of cultural interest. Specialists can add textual and multimedia data and metadata to digitized documents through a graphical interface that does not require technical knowledge. They are helped in this endeavor by sophisticated document analysis tools that allow, for instance, spotting words or patterns in images of documents. The suite is intended to ease considerably the process of bringing locked-away historical materials to the attention of the general public by covering all the steps from managing a digital collection to creating interactive presentations suited for cultural exhibitions. Its genesis and sustained development reside in a collaboration of archivists, historians and computer scientists, the latter being not only in charge of the development of the software, but also of creating and incorporating novel pattern recognition techniques for document analysis. Title In search of a good novel, neither reading activity nor querying matter, but examining search results does Abstract Borrowing novels is a major activity in public libraries. However, interest in developing tools for fiction searching and in analyzing the use of these tools has been minor. This study examines how tools provided by an enriched public library catalogue are used to access novels to read. 58 users searched for interesting novels to read in a simulated situation where they had only a vague idea of what they would like to read. Data consist of search logs, pre- and post-search questionnaires, and observations. For analyzing associations between novel reading activity, search variables, and search success, Pearson correlation coefficients were calculated. 
Based on this information, path models were built for predicting search success, i.e. the interest ratings of the novels found. Investing effort in examining results improves search success, i.e. finding interesting novels, whereas effort in querying has no bearing on it. Novel reading activity was not associated with the search process, effort, or success variables observed. The results suggest that, in designing systems for fiction retrieval, enriching result presentation with more detailed book information would benefit users in identifying good novels. Title Unlocking radio broadcasts: user needs in sound retrieval Abstract This poster reports the preliminary results of a user study uncovering the information-seeking behaviour of humanities scholars dedicated to radio research. The study is part of an interdisciplinary research project on radio culture and auditory resources. The purpose of the study is to inform the information architecture and interaction design of a research infrastructure that will enable future radio- and audio-based research. Results from a questionnaire survey on humanities scholars' research interests and information needs, preferred access points, and indexing levels are reported. Finally, a flexible metadata schema is suggested that includes both general metadata and highly media- and research-project-specific metadata. Title 'Erasmus': an organization- and user-centered dublin core metadata tool Abstract Digital library interoperability is supported by good-quality metadata. The design of metadata creation and management tools is therefore an important component of overall digital library design. A number of factors affect metadata tool usability, including task complexity, interface usability, and organizational context of use. These issues are being addressed in the user-centered design of a metadata tool for the Internet Public Library. Title Categorization of computing education resources with utilization of crowdsourcing Abstract The Ensemble Portal harvests resources from multiple heterogeneous federated collections. Managing these dynamically increasing collections requires an automatic mechanism to categorize records into corresponding topics. We propose an approach that uses existing ACM DL metadata to build classifiers for harvested resources in the Ensemble project. We also present our experience with utilizing the Amazon Mechanical Turk platform to build ground truth training data sets from Ensemble collections. CCS Applied computing Computers in other domains Publishing Title Challenges in generating bookmarks from TOC entries in e-books Abstract The task of extracting document structures from a digital e-book is difficult and is an active area of research. On the other hand, many e-books already have a table of contents (TOC) at the beginning of the document. This may lead us to believe that adding bookmarks into a digital document (e-book) based on the existing TOC would be trivial. In this paper, we highlight the challenges involved in this task of automatically adding bookmarks to an existing e-book based on the TOC that exists within the document. If we are able to reliably identify the specific locations of each TOC entry within the document, the algorithms can be easily extended to identify document structures within e-books that have a TOC. We describe a tool we have built called Booky that tries to add automatic PDF bookmarks to existing PDF-based e-books that have a TOC as part of the document content. 
The tool addresses most of the challenges that have been identified while leaving a few tricky scenarios open. Title Ad insertion in automatically composed documents Abstract We consider the problem of automatically inserting advertisements ( Title Displaying chemical structural formulae in ePub format Abstract We describe a tool designed to enhance the visualization of chemical structural formulae in e-book readers. When dealing with small formulae, to avoid the pixelation effect with zoomed images, the formula is converted to a vector representation and then enlarged. Conversely, large formulae are split into sub-images by cutting the image at suitable locations, attempting to minimize the parts of the formula that are broken. In both cases the formulae are embedded in an ePub document that allows users to browse the chemical structure on most reading devices. Title Beyond PDF and ePub: toward an interactive textbook Abstract This paper describes a new and unique vision for electronic textbooks. It incorporates a number of active components such as video, code editing and execution, and code visualization as a way to enhance the typical static electronic book format. In addition, the textbook is created with an open source authoring system that has been developed to allow the instructor to customize the content of the active and passive parts of the text. Initial results of a semester-long trial are presented as well. Title Personalized newscasts and social networks: a prototype built over a flexible integration model Abstract The way we watch television is changing with the introduction of attractive Web activities that move users away from TV to other media. The integration of the cultures of TV and Web is still an open issue. How can we make TV more open? How can we enable a possible collaboration of these two different worlds? TV-Web convergence is much more than placing a Web browser into a TV set or putting TV content into a Web media player. The NoTube project, funded by the European Community, demonstrates how to introduce an open and general set of tools that is adaptable to a number of possible scenarios and allows a designer to implement the targeted final service with ease. A prototype based on the NoTube model, in which a smartphone is used as a secondary screen, is presented. The video demonstration [11] is available at http://youtu.be/dMM7MH9CZY8. Title Open for business Abstract Should academic articles be available for free on the Web? NA Title Now that's news: substitution and culture in electronic newspaper adoption in Scandinavia Abstract This paper investigates the intent to use electronic newspapers in three Scandinavian countries. It explores the influence of perceived technology substitution and cultural factors, as well as perceived ease of use and usefulness. The electronic newspaper is seen as a substitute for the printed kind, distributed digitally on e-reader platforms. The data came from 1804 surveys administered in Norway, Sweden and Denmark. The results indicate that perceived substitution is the most important driver behind the intent to use electronic newspapers, while culture has little or no effect. These results contribute to the nascent research on how the superiority of perceived substitutive functionality of one technological artifact over another may lead to the adoption of the superior artifact. It also calls into question the role of culture in technology adoption. 
Title Paginate dynamic and web content Abstract Highly customized and content-driven documents present substantial challenges in producing sophisticated layout. In fact, these are apps that usually look like well-designed documents. A concrete example is e-books. E-books have re-flowing requirements to allow the user to read them on a plethora of devices as well as change the font size and font style. While this increases the flexibility of the medium, it loses common features found in books such as footnotes, marginalia (a.k.a. side notes), pull-quotes, and floats. This paper introduces an approach to extending the concept of the galley into a generalized document design instrument. The proposed solution has the aim of providing an easy and flexible, yet powerful, way to express complex layout for highly dynamic and re-flowing content. To achieve this goal, it is important not only to express all the areas available within the page or page region, but also to identify a means of efficiently mapping content to them. To serve this purpose, a role-based mapper has been introduced, linking both flow and out-of-flow content. NA Title Probabilistic document model for automated document composition Abstract We present a new paradigm for automated document composition based on a generative, unified probabilistic document model (PDM). The model formally incorporates key design variables such as content pagination, relative arrangement possibilities for page elements, and possible page edits. These design choices are modeled jointly as coupled random variables (a Bayesian Network) with uncertainty modeled by their probability distributions. The overall joint probability distribution for the network assigns higher probability to good design choices. Given this model, we show that the general document layout problem can be reduced to probabilistic inference over the Bayesian network. We show that the inference task may be accomplished efficiently, scaling linearly with the content in the best case. We provide a useful specialization of the general model and use it to illustrate the advantages of soft probabilistic encodings over hard one-way constraints in specifying design aesthetics. Title Towards a faithful visualization of historical books on e-book readers Abstract The faithful visualization of historical documents on e-book devices and tablet computers is addressed in this paper. To this end, digitized books should be converted to re-flowable formats where the characters are easily re-sized. This is accomplished by first analyzing the document to extract the characters, which are then clustered and replaced by prototypes. The prototypes are represented as SVG objects and then arranged in the proper position in the converted document. Among other applications, the proposed conversion can be used to allow visitors of archives and exhibitions to easily browse and consult historical documents on dedicated devices or on personal mobile devices that support standard re-flowable formats. The system is quantitatively tested on the well-known UW-I dataset by computing OCR errors on the original images and on the reconstructed ones. The visual rendering of historical documents is evaluated on a digitized book from the 19th century. 
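The prototype-substitution step described in the last abstract above can be illustrated with a small sketch: cluster the extracted character bitmaps and replace each occurrence by its cluster centroid. The feature choice (raw pixels), the number of clusters, and the toy data below are assumptions for illustration, not the paper's actual pipeline, and a real system would additionally vectorize the prototypes as SVG.

```python
# Illustrative sketch of prototype substitution for re-flowable conversion,
# assuming character bitmaps have already been extracted by document analysis.
import numpy as np
from sklearn.cluster import KMeans

def build_prototypes(glyphs: np.ndarray, n_prototypes: int = 64):
    """glyphs: array of shape (n_glyphs, height, width) of normalized bitmaps."""
    flat = glyphs.reshape(len(glyphs), -1)
    model = KMeans(n_clusters=n_prototypes, n_init=10, random_state=0).fit(flat)
    prototypes = model.cluster_centers_.reshape(n_prototypes, *glyphs.shape[1:])
    return prototypes, model.labels_  # labels_[i] says which prototype replaces glyph i

# Toy usage with random "glyphs"; in practice these come from the scanned book.
rng = np.random.default_rng(0)
glyphs = rng.random((500, 24, 16))
prototypes, labels = build_prototypes(glyphs, n_prototypes=16)
print(prototypes.shape, labels[:10])
```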
CCS Applied computing Computers in other domains Military CCS Applied computing Computers in other domains Cartography Title Conflation of road network and geo-referenced image using sparse matching Abstract This paper presents an automatic approach to rectify misalignments between a geo-referenced Very High Resolution (VHR) optical image (raster) and a road database (vector). Due to inconsistent representations of road objects in different data sources, the extraction and validation of the homologous road features are complicated. The proposed Sparse Matching (SM) approach is able to smoothly snap the road features from the vector database to their corresponding road features in the VHR image. This novel conflation approach includes three main steps: linear feature preprocessing; sparse matching; feature transformation. Instead of directly extracting the complete road network from the image, which is still a challenging topic for the image processing community, linear features are extracted as road candidates using an Elastic Circular Mask (ECM), and the noise is filtered by means of perceptual factors via a Genetic Algorithm (GA). With the sparse matching approach, the correspondence between the road candidates from the image and the road features from the vector database can be maximized in terms of geometric and radiometric characteristics. Finally, we compare the transformation results from two different transformational functions, i.e., the piecewise Rubber-Sheeting (RUBS) approach and the Thin Plate Splines (TPS) approach, for the matched features. The main contributions of this proposed approach include: 1) a novel sparse matching approach tailored to the conflation framework; 2) efficient noise filtering of the ECM detector results via the GA approach; 3) a numerical comparison of two popular transformational functions. The proposed method has been tested on various imagery scenarios, and a correct-match ratio of over 80 percent is achieved in our experiments; at the same time, the average Root Mean Square (RMS) error decreases from 30 meters to less than 10 meters, which makes it possible to use a snake-based algorithm for further processing. Title Cartography and information presentation: a graphics/visualization perspective Abstract The purpose of a map is to present information about the earth. For millennia cartographers have perfected the craft of map-making, in the process discovering many design principles that now form the basis of cartographic information presentation. One of the challenges facing all of us is how to integrate these traditional principles into modern geographic information systems. Not surprisingly, many of these cartographic principles apply to other forms of visualization. The first part of the presentation describes how cartographic thinking has informed information visualization. Information visualization research has benefited enormously from the work of great cartographers including Jacques Bertin and Eduard Imhof. The second part presents examples where ideas from information visualization, and progress in automating graphic design, have led to new ways to make maps. A major goal of future research should be to enable computers to present information effectively using a well-designed and beautiful map. Title Name-ethnicity classification from open sources Abstract The problem of ethnicity identification from names has a variety of important applications, including biomedical research, demographic studies, and marketing. 
Here we report on the development of an ethnicity classifier where all training data is extracted from public, non-confidential (and hence somewhat unreliable) sources. Our classifier uses hidden Markov models (HMMs) and decision trees to classify names into 13 cultural/ethnic groups, with individual group accuracy comparable to that of earlier binary (e.g., Spanish/non-Spanish) classifiers. We have applied this classifier to over 20 million names from a large-scale news corpus, identifying interesting temporal and spatial trends in the representation of particular cultural/ethnic groups. Title Map-labelling with a multi-objective evolutionary algorithm Abstract Title Presenting route instructions on mobile devices Abstract Title A map generalization model based on algebra mapping transformation Abstract Title An empirical study of algorithms for point-feature label placement Abstract Title Expert systems in government: a look at the redistricting problem Abstract Title Geographic information systems: digital technology with a future? Abstract CCS Applied computing Computers in other domains Agriculture CCS Applied computing Computers in other domains Computing in government CCS Applied computing Computers in other domains Personal computers and PC applications CCS Applied computing Operations research Consumer products Title Virtual environment for surprises Abstract Creation of a virtual interactive and highly evolved environment with Surprises characters. Title PalmRC: imaginary palm-based remote control for eyes-free television interaction Abstract User input on television (TV) typically requires a mediator device, such as a handheld remote control. While being a well-established interaction paradigm, a handheld device has serious drawbacks: it can easily be misplaced due to its mobility, and in the case of a touch-screen interface it also requires additional visual attention. Emerging interaction paradigms like 3D mid-air gestures using novel depth sensors, such as Microsoft's Kinect, aim at overcoming these limitations, but are known to be tiring, for example. In this paper, we propose to leverage the palm as an interactive surface for TV remote control. Our contribution is three-fold: (1) we explore the conceptual design space in an exploratory study. (2) Based upon these results, we investigate the effectiveness and accuracy of such an interface in a controlled experiment. And (3), we contribute PalmRC: an eyes-free, palm-surface-based TV remote control, which in turn is evaluated in an early user feedback session. Our results show that the palm has the potential to be leveraged for device-less and eyes-free TV remote interaction without any third-party mediator device. Title Preliminary experimentation about interactive spiral knowledge mining based on conjoint analysis Abstract In order to identify the best utility of a product with the help of a questionnaire, marketers need to clearly understand the preferences and choices of consumers. However, this preference may depend on many parameters, and it may be difficult for marketers to be sure of the consistency of consumers' responses. If consumers could receive feedback or compare their results with other consumers, it would help marketers obtain more precise analysis results. This paper proposes to construct a web-based Interactive Spiral Questionnaire based on Conjoint Analysis with Stepwise Refinement and Personal Diagnosis to support Social Norm Comparison. 
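As background to the last abstract above, the core computation in conjoint analysis is the estimation of per-attribute "part-worth" utilities from a respondent's ratings of product profiles. The sketch below shows one standard way to do this with ordinary least squares on dummy-coded attributes; the attribute names, levels, and ratings are made-up illustrative data, not material from the paper's system.

```python
# Illustrative part-worth estimation for conjoint analysis (assumed toy data).
import numpy as np

# Each profile: (price level, brand level) one-hot coded against a baseline.
profiles = np.array([
    [0, 0],  # low price, brand A (baseline levels)
    [1, 0],  # high price, brand A
    [0, 1],  # low price, brand B
    [1, 1],  # high price, brand B
], dtype=float)
ratings = np.array([8.0, 4.0, 9.0, 5.5])  # one respondent's preference ratings

# Design matrix with an intercept column; least squares gives the utilities.
X = np.hstack([np.ones((len(profiles), 1)), profiles])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
intercept, u_high_price, u_brand_b = coef
print(f"part-worth of high price: {u_high_price:+.2f}, of brand B: {u_brand_b:+.2f}")
```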
Title Recommender systems at the long tail Abstract Recommender systems form the core of e-commerce systems. In this paper we take a top-down view of recommender systems and identify challenges, opportunities, and approaches in building recommender systems for a marketplace platform. We use eBay as an example where the elaborate interaction offers a number of opportunities for creative recommendations. However, eBay also poses complexities resulting from high sparsity of relationships. Our discussion can be generalized beyond eBay to other marketplaces. Title An augmented reality third landscape Abstract Leaf++ is a ubiquitous, interstitial information tool. It is designed as a new "eye" that can be used to look at the natural landscape of our cities. It is designed to help us Leaf++ is an augmented reality system which employs computer vision techniques to recognize plants from their leaves, and allows us to associate them with digital information, interactive experiences, and generative aesthetic experiences whose purpose is to create a disseminated, ubiquitous, accessible form of interaction with the natural environment which allows the creation of a suggestive, exciting and, most of all, desirable and accessible contact with the knowledge, wisdom and awareness about the inhabitants of the natural ecosystem in our surroundings. This paper presents supporting theories, methodological approaches used in the research and project, technological strategy and two use cases: for education and art performance. Title Towards a better understanding of mobile shopping assistants: a large scale usage analysis of a mobile bargain finder application Abstract Mobile shopping assistants have been subject to research in the field of ubiquitous and pervasive computing for many years. Now the wide adoption of mobile shopping applications for smartphones allows evaluation on a large scale. To study how consumers actually use these applications, we analyze server logs of a mobile bargain finder application for the iPhone used by 33,000 users over a period of six months. In this paper we discuss our approach, the methods we have used, and some challenges and limitations we have encountered. First results indicate that, contrary to the focus of most research in the field, the application is used more from home than at the point of sale or on the go. Title My grandfather's iPod: an investigation of emotional attachment to digital and non-digital artefacts Abstract -- to explore the nature and dimensions of attachment to digital and non-digital artefacts and explicate any differences in emotional attachment between digital and non-digital artefacts. -- Repertory grid based study -- no clear distinctions between attachment to digital and non-digital artefacts -- need to explore the underlying factors further in particular in relation to age and gender -- complements earlier reported studies which suggest that digital artefacts are much less likely to afford attachment -- digital artefacts do not pose unique challenges for sustainable interaction design Title A system for safe flash-heat pasteurization of human breast milk Abstract We present ongoing development of a low-cost system to improve the flash-heat pasteurization process for human breast milk currently utilized in resource-constrained developing regions. Flash-heat was designed for low-resource environments, is simple to use and requires minimal infrastructure. 
It is currently used at a small scale to provide safe breast milk to vulnerable infants with special needs. Safety concerns have limited the adoption of this method for use in human milk banks. The system presented in this paper improves the safety and procedural compliance of the flash-heat process by continuously monitoring the temperature of milk as it is being pasteurized, providing feedback to the user performing the procedure, and bringing remotely located quality assurance personnel into the process-approval loop. In partnership with PATH, a Seattle-based NGO, the system will be piloted at a human milk bank in South Africa later this year. The longer-term vision of the project is that the improved monitoring, feedback and reporting capabilities will help scale up the adoption of cost-effective flash-heat pasteurization for establishing human milk banks in developing countries. We present results from in-lab experiments that have helped us assess the feedback capabilities of our system and have validated the need for having a temperature monitoring and feedback system to enhance the safety of the flash-heat process. Title A study on the impact of product images on user clicks for online shopping Abstract In this paper we study the importance of image-based features for the click-through rate (CTR) in the context of a large-scale product search engine. Typically product search engines use text-based features in their ranking function. We present a novel idea of using image-based features, common in the photography literature, in addition to text-based features. We used a regression model based on stochastic gradient boosting to learn relationships between features and CTR. Our results indicate statistically significant correlations between the image features and CTR. We also see improvements to NDCG and mean standard regression. Title Introduction to display advertising: a half-day tutorial Abstract Display advertising is one of the two major advertising channels on the web (in addition to search advertising). Display advertising on the Web is usually done by CCS Applied computing Operations research Industry and manufacturing CCS Applied computing Operations research Computer-aided manufacturing Title A metric-based safety workflow for electric/electronic architectures of vehicles Abstract The ISO 26262 - Title Computer integrated fixture design system Abstract Flexible fixturing is an essential ingredient of flexible manufacturing systems (FMS) and computer-integrated manufacturing systems (CIMS). Computer-aided fixture design (CAFD) has become a research focus in implementing FMS and CIMS. This paper presents an overview of interactive computer software developed with a CAD interface for designing fixtures for machining centers (HMC/VMC). An exhaustive set of structured queries incorporated in the preprocessor prompts the designer to extract qualitative and quantitative part features. The database and decision support system (i.e., the rule base and the knowledge base) built into the design module assist the designer in selecting and positioning locators, calculating clamping forces, and deciding the number and types of clamps, their locations and orientations, etc. Finally, in the post-processor, the bill of materials and the part and assembly drawings are obtained. A case study on the design of a fixture for a roller head is presented and results are discussed. 
The implementation of the system has proved to be a quick and effective tool that reduces the design lead time from a few days to a few hours and requires only a fraction of the effort and expertise on the part of the designer. Title Introduction to CNC routing for prototyping and manufacturing Abstract This Studio will give an introduction to Subtractive Digital Fabrication, using a ShopBot CNC (Computer Numeric Control) Tool, and explore options for fast local manufacturing of precise project pieces large and small. It will involve both theory and hands-on components that will give people involved in building tangible embedded and embodied interfaces an overview of the processes involved in creating prototypes and manufacturing components using these types of tools. Title EYECane: navigating with camera embedded white cane for visually impaired person Abstract We demonstrate a novel assistive device, called the "EYECane", which can help visually impaired or blind people to gain safer mobility. The EYECane is a white cane with an embedded camera and computer. It automatically detects obstacles and recommends obstacle-avoiding paths to the user through an acoustic interface. This is performed in three steps: first, it extracts obstacles from the image stream using online background estimation; it then generates an occupancy grid map, which is given to a neural network. Finally, the system notifies the user of the paths recommended by machine learning. To assess the effectiveness of the proposed EYECane, it was tested with 5 users, and the results show that it can support safer navigation and reduce the practice and effort needed to become adept at using the white cane. Title Discrete event simulation to generate requirements specification for sustainable manufacturing systems design Abstract A sustainable manufacturing systems design using processes, methodologies, and technologies that are energy efficient and environmentally friendly is desirable and essential for sustainable development of products and services. Efforts must be made to create and maintain such sustainable manufacturing systems. Discrete Event Simulation (DES) in combination with a Life Cycle Assessment (LCA) system can be utilized to evaluate manufacturing system performance, taking into account environmental measures before actual construction or use of the manufacturing system. In this paper, we present a case study to show how DES can be utilized to generate requirements specifications for manufacturing systems in the early stages of the design phase. Requirement specification denotes the description of the behavior of the system to be developed. The case study incorporates the use of LCA data in combination with DES. Data for the model in the case study is partly provided through the format supported by the Core Manufacturing Simulation Data (CMSD) standardization effort. The case study develops a prototype paint shop m