Tag Archives: Publications

New paper: “Facilitating collaborative learning between two primary schools using large multi-touch devices”

This week we had a paper published in Springer’s Journal of Computers in Education, entitled: Facilitating collaborative learning between two primary schools using large multi-touch devices. This paper is the first output from a collaborative project between Durham University’s School of Education (led by Andrew Joyce-Gibbons) and Cardiff Metropolitan University (led by Gary Beauchamp), focusing on computer-supported collaborative learning through multi-touch devices.

The abstract of the paper is below; you can read the full paper (or download a PDF) online:

Facilitating collaborative learning between two primary schools using large multi-touch devices

James McNaughton, Tom Crick, Andrew Joyce-Gibbons, Gary Beauchamp, Nick Young and Elaine Tan

This paper presents a technical case study and the associated research software/hardware underpinning an educational research trial in which large touchscreen interfaces were used to facilitate collaborative interactions between primary school students at separate locations. As part of the trial, an application for supporting a collaborative classroom activity was created which allowed students at either location to transfer resources to the students at the other via a ‘flick’ gesture. The trial required several innovations to the existing SynergyNet software framework to enable it to support synchronous remote collaboration. These innovations enabled the first successful classroom collaboration activities between two separate locations within the United Kingdom using large touchscreen interfaces. This paper details the challenges encountered in implementing these innovations and their solutions.

Keywords: Multi-touch devices; Gestures; Computer-supported collaborative learning; SynergyNet; Networking; ICT

DOI: 10.1007/s40692-017-0081-x
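The ‘flick’ transfer described in the abstract can be sketched in outline: a drag whose release velocity exceeds some threshold is treated as a hand-off to the remote classroom, serialised as a network message. The sketch below is a minimal Python illustration, not the actual SynergyNet implementation; the message fields, threshold value and function names are all assumptions for the example.

```python
import json

# Illustrative threshold (pixels/second); the real framework's value is unknown.
FLICK_SPEED_THRESHOLD = 1200.0

def on_gesture_release(item_id, pos, velocity):
    """Decide whether a released drag counts as a 'flick' to the remote site.

    Returns a JSON transfer message for the network layer, or None if the
    gesture was too slow and the item should stay on the local table.
    """
    vx, vy = velocity
    speed = (vx * vx + vy * vy) ** 0.5
    if speed < FLICK_SPEED_THRESHOLD:
        return None
    return json.dumps({
        "type": "transfer",
        "item": item_id,
        "exit_point": list(pos),   # where the item left the local screen
        "velocity": [vx, vy],      # so the remote site can animate its entry
    })

msg = on_gesture_release("worksheet-7", (1020, 340), (1500.0, -200.0))
```

A slow release (say, `velocity=(100.0, 0.0)`) returns `None`, so the resource simply stays on the local table.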

 
(also see: Publications)


Paper in SNAM: “Measuring UK crime gangs: a social network problem”

In July, we had a paper accepted for publication in Springer’s Social Network Analysis and Mining, entitled: Measuring UK crime gangs: a social network problem. This paper builds upon our previous work on social networks and crime analytics, using an interesting gun and gang crime dataset from Greater Manchester Police over a seven-year period.

The abstract of the paper is below; you can access it via Springer’s SharedIt service or our final pre-print on GitHub:

Measuring UK crime gangs: a social network problem

Giles Oatley and Tom Crick

This paper describes the output of a study to tackle the problem of gang-related crime in the UK; we present the intelligence and routinely-gathered data available to a UK regional police force, and describe an initial social network analysis of gangs in the Greater Manchester area of the UK between 2000 and 2006. By applying social network analysis techniques, we attempt to detect the birth of two new gangs based on local features (modularity, cliques) and global features (clustering coefficients). Identifying changes in these features can therefore help flag the possible birth of new gangs (sub-networks) in the social system. Furthermore, we study the dynamics of these networks globally and locally, and have identified global characteristics showing that they are not random graphs but small-world graphs, implying that the formation of gangs is not a random event. However, we are not yet able to conclude anything significant about scale-free characteristics due to insufficient sample size. A final analysis looks at gang roles and develops further insight into the nature of the different link types, referring to Klerks’ ‘third generation’ analysis, as well as a brief discussion of the potential UK policy applications of this work.

Keywords: Gangs; Gun crime; Scale-free networks; Small-world networks; Social distance; Communities; Crime policy

DOI: 10.1007/s13278-015-0265-1
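For readers curious about the small-world test mentioned in the abstract: the key local feature, the clustering coefficient, is straightforward to compute, and a small-world network shows far higher clustering than a random graph of the same density. A minimal pure-Python sketch is below; the toy edge list is invented for illustration and has nothing to do with the police dataset.

```python
import itertools

def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph,
    given as {node: set(neighbours)}."""
    coeffs = []
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        # Count edges among the node's neighbours (closed triangles).
        links = sum(1 for u, v in itertools.combinations(nbrs, 2) if v in adj[u])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

# Toy network: two tight cliques (candidate "gangs") joined by one bridge tie.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = {n: set() for n in range(6)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

print(clustering_coefficient(adj))  # ≈ 0.78: much higher than a sparse random graph
```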

 
(also see: Publications)


Paper at SoSE 2015: “Smart data-harnessing for financial value in short-term hire electric car schemes”

Last month we presented a paper at the 10th IEEE System of Systems Engineering Conference (SoSE 2015) in Texas, entitled: Smart data-harnessing for financial value in short-term hire electric car schemes. This paper is one of the outputs from a collaboration with the University of Bristol’s Systems Centre in the Faculty of Engineering (led by Theo Tryfonas), focusing on smart city infrastructure, big data and monitoring.

The abstract of the paper is below; you can read the full paper (or download a PDF) online:

Smart data-harnessing for financial value in short-term hire electric car schemes

Peter Cooper, Tom Crick and Theo Tryfonas

In the developed world, two distinct trends are emerging to shake up the current dominance of privately-owned, combustion motor car transport. The first is the emergence of the electric powertrain for vehicles as an affordable and mass-marketed means of transport. This carries with it the potential to address many of the immediate shortcomings of the current paradigm, especially CO2 emissions, air and noise pollution. The second is the rise of new hire models of car ownership — the concept of paying for the use of a car as and when you need it. This carries with it the potential to address many of the existing issues: outlay-induced car use, residential parking and social division. On a similar timescale, we are witnessing the rise of smart technologies and smart cities, concepts that use data about the state of a system or elements of it to create value.

There have been relatively few examples of schemes that have combined the electric and hire-model concepts, despite the huge potential for synergy. Indeed, the status quo is against them on both counts: cars are predominantly privately-owned and driven by internal combustion engines. Nevertheless, there is significant potential for this to change over the coming years.

Keywords: Electric Vehicles; Vehicle Hire Models; Smart Technologies; Smart Monitoring; Smart Cities; Big Data; Environmental Impact

DOI: 10.1109/SYSOSE.2015.7151928

 


New paper: “Top Tips to Make Your Research Irreproducible”

It is an unfortunate convention of science that research should pretend to be reproducible; we have noticed (and contributed to) a number of manifestos, guides and top tips on how to make research reproducible, but we have seen very little published on how to make research irreproducible.

Irreproducibility is the default setting for all of science, and irreproducible research is particularly common across the computational sciences (for example, here and here). The study of making your work irreproducible without reviewers complaining is a much neglected area; we feel therefore that by encapsulating our top tips on irreproducibility, we will be filling a much-needed gap in the domain literature. By following our tips, you can ensure that if your work is wrong, nobody will be able to check it; if it is correct, you can make everyone else do disproportionately more work than you to build upon it. Our top tips will also help you salve the conscience of certain reviewers still bound by the fussy conventionality of reproducibility, enabling them to enthusiastically recommend acceptance of your irreproducible work. In either case you are the beneficiary.

  1. Think “Big Picture”. People are interested in the science, not the experimental setup, so don’t describe it.
  2. Be abstract. Pseudo-code is a great way of communicating ideas quickly and clearly while giving readers no chance to understand the subtle implementation details that actually make it work.
  3. Short and sweet. Any limitations of your methods or proofs will be obvious to the careful reader, so there is no need to waste space on making them explicit.
  4. The deficit model. You’re the expert in the domain, only you can define what algorithms and data to run experiments with.
  5. Don’t share. Doing so only makes it easier for other people to scoop your research ideas, understand how your code actually works instead of why you say it does, or worst of all to understand that your code doesn’t work at all.

Read the full version of our high-impact paper on arXiv.


Paper submitted to CAV 2015: “Dear CAV, We Need to Talk About Reproducibility”

Today, Ben Hall (Cambridge), Samin Ishtiaq (Microsoft Research) and I submitted a paper to CAV 2015, the 27th International Conference on Computer Aided Verification, to be held in San Francisco in July. CAV is dedicated to the advancement of the theory and practice of computer-aided formal analysis methods for hardware and software systems; the conference covers the spectrum from theoretical results to concrete applications, with an emphasis on practical verification tools and the algorithms and techniques that are needed for their implementation.

In this paper we build upon our recent work, highlighting a number of key issues relating to reproducibility and how they impact on the CAV (and wider computer science) research community, proposing a new model and workflow to encourage, enable and enforce reproducibility in future instances of CAV. We applaud the CAV Artifact Evaluation process, but we need to do more. You can download our arXiv pre-print; the abstract is as follows:

How many times have you tried to re-implement a past CAV tool paper, and failed?

Reliably reproducing published scientific discoveries has been acknowledged as a barrier to scientific progress for some time but there remains only a small subset of software available to support the specific needs of the research community (i.e. beyond generic tools such as source code repositories). In this paper we propose an infrastructure for enabling reproducibility in our community, by automating the build, unit testing and benchmarking of research software.

 
(also see: GitHub repo)


Paper submitted to Recomputability 2014: “Share and Enjoy”: Publishing Useful and Usable Scientific Models

Last month, Ben Hall, Samin Ishtiaq, Kenji Takeda (all Microsoft Research) and I submitted a paper to Recomputability 2014, to be held in conjunction with the 7th IEEE/ACM International Conference on Utility and Cloud Computing (UCC 2014) in London in December. This workshop is an interdisciplinary forum for academic and industrial researchers, practitioners and developers to discuss challenges, ideas, policy and practical experience in reproducibility, recomputation, reusability and reliability across utility and cloud computing. It aims to provide an opportunity to share and showcase best practice, as well as offering a platform to further develop policy, initiatives and practical techniques for researchers in this domain.

In our paper, we discuss a number of issues in this space, proposing a new open platform for the sharing and reuse of scientific models and benchmarks. You can download our arXiv pre-print; the abstract is as follows:

The reproduction and replication of reported scientific results is a hot topic within the academic community. The retraction of numerous studies from a wide range of disciplines, from climate science to bioscience, has drawn the focus of many commentators, but there exists a wider socio-cultural problem that pervades the scientific community. Sharing data and models often requires extra effort, and this is currently seen as a significant overhead that may not be worth the time investment.

Automated systems, which allow easy reproduction of results, offer the potential to incentivise a culture change and drive the adoption of new techniques to improve the efficiency of scientific exploration. In this paper, we discuss the value of improved access and sharing of the two key types of results arising from work done in the computational sciences: models and algorithms. We propose the development of an integrated cloud-based system underpinning computational science, linking together software and data repositories, toolchains, workflows and outputs, providing a seamless automated infrastructure for the verification and validation of scientific models and in particular, performance benchmarks.

 
(see GitHub repo)


Paper submitted to WSSSPE2: “Can I Implement Your Algorithm?”: A Model for Reproducible Research Software

Yesterday, Ben Hall and Samin Ishtiaq (both Microsoft Research Cambridge) and I submitted a paper to WSSSPE2, the 2nd Workshop on Sustainable Software for Science: Practice and Experiences, to be held in conjunction with SC14 in New Orleans in November. As per the aims of the workshop: progress in scientific research is dependent on the quality and accessibility of software at all levels, and it is critical to address challenges related to the development, deployment and maintenance of reusable software, as well as education around software practices.

As discussed in our paper, we feel these research software engineering problems are manifest not just in computer science, but across the computational science and engineering domains (particularly with regard to benchmarking and availability of code). We highlight a number of recommendations to address these issues, as well as proposing a new open platform for scientific software development. You can download our arXiv pre-print; the abstract is as follows:

The reproduction and replication of novel scientific results has become a major issue for a number of disciplines. In computer science and related disciplines such as systems biology, the issues closely revolve around the ability to implement novel algorithms and approaches. Taking an approach from the literature and applying it in a new codebase frequently requires local knowledge missing from the published manuscripts and project websites. Alongside this issue, benchmarking, and the development of fair, and widely available benchmark sets present another barrier. In this paper, we outline several suggestions to address these issues, driven by specific examples from a range of scientific domains. Finally, based on these suggestions, we propose a new open platform for scientific software development which effectively isolates specific dependencies from the individual researcher and their workstation and allows faster, more powerful sharing of the results of scientific software engineering.

 
(see GitHub repo)


Paper in ACM TOCE: “Restart: The Resurgence of Computer Science in UK Schools”

Further to the previous CAS papers, Neil Brown (University of Kent), Sue Sentance (formerly Anglia Ruskin University, now CAS), Simon Humphreys (CAS/BCS) and I have had a paper accepted into ACM Transactions on Computing Education: Restart: The Resurgence of Computer Science in UK Schools, part of a Special Issue on Computing Education in (K-12) Schools.

The paper will soon be available to download for free via the ACM Author-ize service (or you can download our pre-print); the abstract is as follows:

Computer science in UK schools is undergoing a remarkable transformation. While the changes are not consistent across each of the four devolved nations of the UK (England, Scotland, Wales and Northern Ireland), there are developments in each that are moving the subject to become mandatory for all pupils from age 5 onwards. In this article, we detail how computer science declined in the UK, and the developments that led to its revitalisation: a mixture of industry and interest group lobbying, with a particular focus on the value of the subject to all school pupils, not just those who would study it at degree level. This rapid growth in the subject is not without issues, however: there remain significant forthcoming challenges with its delivery, especially surrounding the issue of training sufficient numbers of teachers. We describe a national network of teaching excellence which is being set up to combat this problem, and look at the other challenges that lie ahead.

 
(see Publications)


Paper at HCII 2014: “Changing Faces: Identifying Complex Behavioural Profiles”

In June, my colleague Giles Oatley presented a joint paper entitled: Changing Faces: Identifying Complex Behavioural Profiles at HCII 2014, the 16th International Conference on Human-Computer Interaction in Crete.

If you do not have institutional access to SpringerLink, especially the Lecture Notes in Computer Science series, you can download our pre-print. The abstract is as follows:

There has been significant interest in the identification and profiling of insider threats, attracting high-profile policy focus and strategic research funding from governments and funding bodies. Recent examples attracting worldwide attention include the cases of Chelsea Manning, Edward Snowden and the US authorities. The challenge with profiling an individual across a range of activities is that their data footprint will legitimately vary significantly based on time and/or location. The insider threat problem is thus a specific instance of the more general problem of profiling complex behaviours. In this paper, we discuss our preliminary research models relating to profiling complex behaviours and present a set of experiments related to changing roles as viewed through large scale social network datasets, such as Twitter. We employ psycholinguistic metrics in this work, considering changing roles from the standpoint of a trait-based personality theory. We also present further representations, including an alternative (non-trait-based) psychological theory and established spatio-temporal and graph/network techniques for crime modelling, to investigate within a wider reasoning framework.
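To give a flavour of what a psycholinguistic metric looks like in practice, one of the simplest trait-style signals is the rate of first-person pronoun use in a user's posts. The sketch below is a crude illustrative stand-in (a LIWC-like category count), not a reproduction of the metrics actually used in the paper:

```python
def pronoun_rate(text, pronouns=("i", "me", "my", "we", "our")):
    """Proportion of tokens that are first-person pronouns: a crude
    psycholinguistic-style feature for profiling a user's posts."""
    tokens = [t.strip(".,!?;:").lower() for t in text.split()]
    hits = sum(t in pronouns for t in tokens)
    return hits / len(tokens) if tokens else 0.0

print(pronoun_rate("I think my team did well"))  # 2 of 6 tokens -> ~0.33
```

Tracking how such a rate shifts over time for one account is the kind of role-change signal the paper's experiments look for at much larger scale.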

 
(see Publications)


Paper at AI-2013: “‘The First Day of Summer’: Parsing Temporal Expressions with Distributed Semantics”

In December, my PhD student Benjamin Blamey presented a joint paper entitled: ‘The First Day of Summer’: Parsing Temporal Expressions with Distributed Semantics at AI-2013, the 33rd SGAI International Conference on Artificial Intelligence in Cambridge.

If you do not have institutional access to SpringerLink, especially the Research and Development in Intelligent Systems series, you can download our pre-print. The abstract is as follows:

Detecting and understanding temporal expressions are key tasks in natural language processing (NLP), and are important for event detection and information retrieval. In the existing approaches, temporal semantics are typically represented as discrete ranges or specific dates, and the task is restricted to text that conforms to this representation. We propose an alternate paradigm: that of distributed temporal semantics, where a probability density function models relative probabilities of the various interpretations. We extend SUTime, a state-of-the-art NLP system, to incorporate our approach, and build definitions of new and existing temporal expressions. A worked example is used to demonstrate our approach: the estimation of the creation time of photos in online social networks (OSNs), with a brief discussion of how the proposed paradigm relates to the point- and interval-based systems of time. An interactive demonstration, along with source code and datasets, is available online.
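The core idea, a density over interpretations rather than a single date range, can be shown with a toy model: treat ‘the first day of summer’ as a normal distribution over the day of the year, peaking near the June solstice but leaving probability mass on other readings (e.g. 1 June, the meteorological start). The mean and spread below are invented for illustration and are not taken from the paper:

```python
import math

def summer_start_density(day_of_year, mean=172.0, sd=14.0):
    """Toy probability density for 'the first day of summer': a normal
    distribution over day-of-year, peaking near the June solstice (~day 172).
    The mean and standard deviation are illustrative assumptions."""
    z = (day_of_year - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

# The density is highest at the astronomical solstice, but 1 June (day 152,
# meteorological summer) still gets non-zero probability rather than zero.
print(summer_start_density(172) > summer_start_density(152))  # True
```

A discrete range representation would have to pick one of these readings and discard the rest; the density keeps them all, weighted.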

 
(see Publications)


CAS paper at SIGCSE’13: “Bringing Computer Science Back Into Schools: Lessons From The UK”

Further to the previous CAS papers, Neil Brown (University of Kent) presented a paper entitled: Bringing Computer Science Back Into Schools: Lessons From The UK at SIGCSE’13, the 44th ACM Technical Symposium on Computer Science Education, in Denver in March.

The paper is available to download for free via the ACM Author-ize service below; you can also listen to Neil’s voice-over of the presentation slides. The abstract is as follows:


Computer science in UK schools is a subject in decline: the ratio of Computing to Maths A-Level students (i.e. ages 16–18) has fallen from 1:2 in 2003 to 1:20 in 2011 and in 2012. In 2011 and again in 2012, the ratio for female students was 1:100, with fewer than 300 female students taking Computing A-Level in the whole of the UK each year. Similar problems have been observed in the USA and other countries, despite the increased need for computer science skills caused by IT growth in industry and society. In the UK, the Computing At School (CAS) group was formed to try to improve the state of computer science in schools. Using a combination of grassroots teacher activities and policy lobbying at a national level, CAS has been able to rapidly gain traction in the fight for computer science in schools. We examine the reasons for this success, the challenges and dangers that lie ahead, and suggest how the experience of CAS in the UK can benefit other similar organisations, such as the CSTA in the USA.

 

ACM DL Author-ize service

Neil C. C. Brown, Michael Kölling, Tom Crick, Simon Peyton Jones, Simon Humphreys, Sue Sentance
SIGCSE ’13 Proceeding of the 44th ACM Technical Symposium on Computer Science Education, 2013


(see Publications)


Paper at AI-2012: “R U :-) or :-( ? Character- vs. Word-Gram Feature Selection for Sentiment Classification of OSN Corpora”

In December, my PhD student Benjamin Blamey presented a joint paper entitled: R U :-) or :-( ? Character- vs. Word-Gram Feature Selection for Sentiment Classification of OSN Corpora at AI-2012, the 32nd SGAI International Conference on Artificial Intelligence in Cambridge (for which he also won the best poster prize).

If you do not have institutional access to SpringerLink, especially the Research and Development in Intelligent Systems series, you can download our pre-print. The abstract is as follows:


Binary sentiment classification, or sentiment analysis, is the task of computing the sentiment of a document, i.e. whether it contains broadly positive or negative opinions. The topic is well-studied, and the intuitive approach of using words as classification features is the basis of most techniques documented in the literature. The alternative character n-gram language model has been applied successfully to a range of NLP tasks, but its effectiveness at sentiment classification seems to be under-investigated, and results are mixed. We present an investigation of the application of the character n-gram model to text classification of corpora from online social networks, the first such documented study, where text is known to be rich in so-called unnatural language, also introducing a novel corpus of Facebook photo comments. Despite hoping that the flexibility of the character n-gram approach would be well-suited to unnatural language phenomena, we find little improvement over the baseline algorithms employing the word n-gram language model.
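The distinction between the two language models is easy to see in a few lines. The sketch below is illustrative feature extraction only, not the paper's classification pipeline: a character trigram still fires on the elongated ‘sooo’ common in social-network text, while a word-level model treats it as one opaque token.

```python
from collections import Counter

def ngrams(text, n, analyzer="char"):
    """Count n-grams; 'char' works on raw characters (robust to creative
    spelling like 'sooo'), 'word' on whitespace-separated tokens."""
    units = list(text.lower()) if analyzer == "char" else text.lower().split()
    return Counter(tuple(units[i:i + n]) for i in range(len(units) - n + 1))

comment = "sooo happyyy :-)"

# The char model captures the repeated-letter pattern...
print(ngrams(comment, 3, "char")[("o", "o", "o")])  # 1
# ...while the word model sees 'sooo' only as a single unseen-looking token.
print(ngrams(comment, 1, "word")[("sooo",)])        # 1
```

In a real classifier these counts would feed a feature vector per document; the paper's finding is that, perhaps surprisingly, the char-gram features buy little over word-gram baselines.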

(see Publications)


Paper at WiPSCE’12: “Grand Challenges for the UK: Upskilling Teachers to Teach Computer Science Within the Secondary Curriculum”

Further to the CAS paper presented at Koli Calling 2011 in Finland in November 2011, Sue Sentance (Anglia Ruskin University) presented a paper entitled: Grand Challenges for the UK: Upskilling Teachers to Teach Computer Science Within the Secondary Curriculum at WiPSCE’12, the 7th International Workshop in Primary and Secondary Computing Education, in Hamburg in November.

The paper is available to download for free via the ACM Author-ize service below; the abstract is as follows:


Recent changes in UK education policy with respect to ICT and Computer Science (CS) have meant that more teachers need the skills and knowledge to teach CS in schools. This paper reports on work in progress in the UK researching models of continuing professional development (CPD) for such teachers. We work with many teachers who either do not have an appropriate academic background to teach Computer Science, or who do but have not utilised it in the classroom due to the curriculum in place for the last fifteen years. In this paper we outline how educational policy changes are affecting teachers in the area of ICT and Computer Science; we describe a range of models of CPD and discuss the role that local and national initiatives can play in developing a hybrid model of transformational CPD, briefly reporting on our initial findings to date.

ACM DL Author-ize service

Sue Sentance, Mark Dorling, Adam McNicol, Tom Crick
WiPSCE ’12 Proceedings of the 7th Workshop in Primary and Secondary Computing Education, 2012


(see Publications)


CAS at Koli Calling 2011

In November, as part of our work with Computing At School (CAS), Sue Sentance (Anglia Ruskin University) and I submitted a paper for Koli Calling 2011, the 11th International Conference on Computing Education Research. Our paper, entitled Computing At School: Stimulating Computing Education in the UK, describes the rationale and motivation for CAS, presenting the current state of computer science education in the UK, as well as its range of initiatives to support teachers and drive curriculum and policy change.

As part of the Koli Calling 2011 programme (Sue had the pleasure of travelling to Koli National Park in Finland!), we had to produce a short video clip summarising our paper:


While some of our discussion has been supplanted by recent events, the paper is available to download for free via the ACM Author-ize service:

ACM DL Author-ize service

Tom Crick, Sue Sentance
Koli Calling ’11 Proceedings of the 11th Koli Calling International Conference on Computing Education Research, 2011


(see Publications)


SuperLab: Future Chips

Yesterday I travelled up to the University of Bradford for the British Science Festival, one of Europe’s largest science festivals. Each year the Festival travels to a different UK location, with over 250 events, activities, exhibitions and trips taking place over a week to showcase the latest in science, technology and engineering. The theme for the 2011 Festival is “Exploring new worlds”. The British Science Festival is also the culmination of my British Science Association Media Fellowship, after working with BBC Wales (predominantly BBC Radio Wales) for the past six weeks. I will be reporting from the Festival’s press centre throughout the week.

However, I am also here for SuperLab, a joint initiative between the National Higher Education STEM Programme and the British Science Association. The National HE STEM Programme supports higher education institutions in the exploration of new approaches to recruiting students and delivering programmes of study within the Science, Technology, Engineering and Mathematics (STEM) disciplines; I have previously worked with the Welsh HE STEM “spoke” based at Swansea University.

SuperLab consists of a poster-based campaign focusing on the wide range of in-store STEM applications in a modern supermarket, for example, the physics behind barcode scanners. It was originally planned to coincide with National Science and Engineering Week 2012 (every March, NSEW showcases how the sciences and engineering relate to our everyday lives and helps to inspire the next generation of scientists), but was reorganised to be part of this year’s Festival, as part of the Science in Action exhibition.

The topic for my research poster was the microprocessor, entitled “Future Chips”, somewhat subverting the original SuperLab theme. Nevertheless, I would assert that the invention of the microprocessor has had the greatest overall impact on our lives and development — I wanted to try and highlight to a wide audience how reliant we are on the all-pervasive microprocessor (especially its multitude of applications), as well as the ubiquitous nature of technology. In doing this, I wanted to get four main themes across:

  • Swimming in a Sea of Silicon: highlighting our reliance on microprocessors;
  • Limitations of Moore’s Law: how we are hitting the limits of existing architectural models and fabrication technologies;
  • The Future is Multi-Core: the move away from a single high-speed processor to a multi-core methodology — a single computing component with numerous independent processors;
  • The Challenge: Power Efficiency: in our increasingly connected digital world, improving the energy efficiency and power consumption of the billions of devices is paramount.

SuperLab wordle

So, with thanks to the superb work from the professional designers (especially considering some of my inane scribblings), here it is:

SuperLab: Future Chips

For further reading…

If you are interested, here are the references to the research that is (briefly) mentioned in my SuperLab poster (or contact me):

I’m also involved in HiPEAC, the European Network of Excellence on High Performance and Embedded Architecture and Compilation, funded under the European Commission Seventh Framework Programme (FP7). The aim of HiPEAC is to steer and increase European research in the area of high performance and embedded computing systems and stimulate cooperation between academia and industry; for more information about HiPEAC, check out its research and activities.
