Weekly Seminar Series
Details of our weekly seminar series, including the timetable and contact details, can be found here.
Thu 15th December 2011, 12:00pm (CS/414, fourth floor of the Computer Science building, West Square, Queen Mary University of London)
Adrian Bangerter, University of Neuchâtel
"Suspending and Reinstating Joint Activities With Dialogue"
Task-switching and interruptions are commonplace in everyday life and have been well-studied in individuals. I will extend this research to the question of how people use communication to manage such processes in collaborative tasks, using a theory of communication as a form of joint activity. By this theory, suspending and reinstating collaborative tasks involves two main constraints: tracking the common ground between participants and managing the social relations that may be threatened by the interruption. Participants in joint activities use a range of verbal and nonverbal signals to manage these constraints. I will present experimental and field data to illustrate these issues.
Thu 30th June 2011, 4:30pm (CS/414, fourth floor of the Computer Science building, West Square, Queen Mary University of London)
Greg Mills, Stanford University
"The emergence of procedural conventions in dialogue"
Existing models of dialogue emphasize the importance of interaction in explaining how referential conventions are established and sustained. However, co-ordination in dialogue requires co-ordination of both content and process. To investigate procedural co-ordination, we report a collaborative task which presents participants with the recurrent co-ordination problem of ordering their actions and utterances into a single coherent sequence. The results provide evidence of interlocutors developing collaborative routines which become conventionalized within a group of language users.
Thu 17th February 2011, 1pm (ITL Building, ground floor meeting room, West Square, Queen Mary University of London)
Ronnie Cann, University of Edinburgh
"Cognate Objects: Events, Pseudoarguments or Objects?"
The existence of apparently intransitive verbs appearing with an apparent direct object that is either directly cognate to the verb or semantically related to it in some way has been known since antiquity, and it gives rise to an erratic but persistent series of discussions in the current linguistic literature. In this paper, I review the properties of Cognate Object Constructions (COCs) in English and Classical Greek and explore their analysis in Dynamic Syntax, suggesting that there are two primary types, eventive COCs and pseudoargumental ones, which differ subtly in interpretation and syntactic properties.
Ronnie Cann is Reader in Linguistics at the University of Edinburgh.
Wed 10th November 2010, 6:30pm (Drapers Lecture Theatre, Geography Building)
Professor Pat Healey, Queen Mary University of London (Inaugural Lecture)
"Communication as a Special Case of Misunderstanding"
Communication is at the core of human social organisation. It is also a fragile process. People often differ in their interpretation of words, gestures and even entire conversations. This raises a basic question: how is communication possible at all?
Patrick Healey is leader of the Interaction, Media and Communication Group (IMC) and Co-Director of the Media and Arts Technology Programme at Queen Mary University of London. He holds a BSc in Behavioural Science (Psychology and Zoology Jt. Hons.) from the University of Nottingham and an MSc and PhD in Cognitive Science from the University of Edinburgh.
A reception will follow the lecture in the Queens Building Foyer.
If you wish to attend, please follow this link to book a place.
Thu 21st October 2010, 3pm (CS/414, fourth floor of the Computer Science building, West Square)
Dr. Dale Barr, University of Glasgow
"Audience design as an incremental achievement"
How do speakers tailor their speech to their addressees' informational needs? We investigated this question by having speakers develop "routines" for describing particular referents with one addressee during a training phase. During a later test phase, these speakers described the same referents either to the same addressee or to a new one. In addition to production measures, we examined measures of on-line processing, including speech onset latency and eye gaze behavior. The results suggest that the earliest moments of utterance planning involve automatic retrieval of routines from memory, which are then re-tailored to current discourse needs.
Dale Barr is a Senior Lecturer in Psychology at the University of Glasgow.
Wed 30th June 2010, 3pm (ITL Top Floor Meeting Room, West Square)
(preceded at 2:45pm by tea and followed by a reception, both in the Informatics Hub)
Dr. Gil Weinberg (Georgia Institute of Technology)
"Extending the Musical Experience - From the Physical to the Digital, and Back"
Over the last 15 years I have explored a number of research directions in which digital technology promises to innovate the core of the musical experience. I have experimented with novel gestural expression, collaborative networks, and constructionist learning: research areas that may lead to musical experiences that cannot be facilitated by traditional means.
My exploration of new gestural expression builds on the notion that, through novel sensing and mapping techniques, new expressive musical gestures can be discovered that are not supported by current acoustic instruments. Such gestures, unconstrained by the physical limitations of acoustic sound production, can provide new possibilities for expressive and creative musical experiences for novice as well as trained musicians.
Remote and local digital networks can revolutionize collaborative musical experiences by allowing players to take an active role in determining and influencing not only their own musical output but also that of their co-performers. By using the network to interdependently share and control musical materials in a group, musicians can combine their musical ideas into a constantly evolving collaborative musical activity that is novel and inspiring.
I have also developed constructionist music-learning systems, which promise to enhance music education by providing hands-on access to programmable music making. Through interaction with physical computational objects, learners can construct personally meaningful musical artifacts that enhance and deepen their learning.
While these projects facilitated novel musical experiences that cannot be achieved by traditional means, their digital nature often led to flat and inanimate speaker-generated sound, lacking the physical richness, visual expressiveness, and embodiment of acoustic music. In my current work, therefore, I attempt to combine the benefits of digital computation with acoustic richness by exploring the concept of "Robotic Musicianship". I define this concept as a combination of musical, perceptual, and social skills with the capacity to produce rich acoustic responses in a physical and visual manner. The robotic musicianship project aims to combine human creativity, emotion, and aesthetic judgment with algorithmic computational capabilities, allowing human and robotic players to cooperate and inspire each other to push music into unexplored domains.
Biography: Gil Weinberg is the Director of Music Technology at the Georgia Institute of Technology. Dr. Weinberg received his M.S. and Ph.D. degrees in Media Arts and Sciences from MIT, after co-founding and holding positions in the music and media software industry in his home country of Israel. In his academic work, Weinberg attempts to expand musical expression, creativity, and learning through meaningful applications of technology. His research interests include new instruments for musical expression, musical networks, machine and robotic musicianship, sonification, and music education. Weinberg's music has been featured in many festivals and concerts, and he has published more than 40 peer-reviewed papers. Based on his most recent project, a set of musical applications for cell phones that allow children and novices to create music in an expressive and intuitive manner, he has established a startup company, ZOOZ Mobile.
Fri 28th May 2010, 2pm (CS/414, Fourth floor of the Computer Science building, West Square)
Laurel Riek (University of Cambridge)
"Social human-robot interaction: facilitating non-verbal communication between people and robots."
As robots start to leave factories and begin to enter our schools, workplaces, and homes, it is important that people are able to interact with them in a way that is comfortable and natural to them. Eventually this might be via natural language dialogue, but given the complexities of language, that may not be available for a while. In the meantime, another approach is to allow people to communicate with robots using non-verbal communication, such as gestures and facial expressions. In addition to ensuring robots are capable of accurately sensing and interpreting human non-verbal cues, it is important that humans are able to accurately understand the cues robots make. This talk will describe several experiments we have conducted using both humanoid and zoomorphic robots which explore various aspects of this problem, including people's cooperation with, empathy toward, and ability to build rapport with interactive robots.
Laurel Riek is a PhD candidate at the University of Cambridge Computer Laboratory. She researches natural human-robot interaction, in particular, facilitating non-verbal communication with robots. Her research explores expression synthesis on android and humanoid robots using naturally evoked human data. She also explores sustainable interaction with robots by applying social signal processing techniques to the analysis of dyadic human conversations. Prior to starting her PhD, she worked for eight years as a Senior Artificial Intelligence Engineer and Roboticist at MITRE, on projects involving search and rescue robots, unmanned vehicles, and human language technology. She received her BSc in Logic and Computation from Carnegie Mellon University in 2000.
Tue 30th March 2010, 3pm (CS/414, Fourth floor of the Computer Science building, West Square)
Prof. J. P. de Ruiter (Bielefeld University)
"Null hypothesis significance testing: why its application is leading us astray, and what we can and cannot do about it."
The standard inferential statistics normally taught in psychology and related fields is "null hypothesis significance testing" (NHST). This is a strange and oddly inconsistent hybrid of the ideas of Ronald Fisher and those of his theoretical enemies Jerzy Neyman and Egon Pearson. Many very smart methodologists and statisticians have warned that this statistical framework, at least as it is normally taught and used in publications, is deeply flawed. And although their arguments have never been countered, their message has been largely ignored. Although many psychologists and cognitive scientists have a vague feeling that something is not entirely kosher about NHST, they often don't know why, and don't know any alternative. But as long as we are judged by the quantity of our publications, and journals and reviewers keep requiring us to make our cases using NHST, we seem to have little choice but to keep performing this mechanical ritual.
In my (hopefully highly interactive) talk, I will briefly discuss the history of inferential statistics in the empirical social sciences and explain what is wrong with NHST. I will give some real examples of how this can lead to absurd situations and bad science. I will also briefly mention a good (or at least consistent) alternative, Bayesian statistics, and its pros and cons. Finally, I will argue that even though we have no real choice but to keep reporting the statistics that journals and reviewers want us to, the least we can do is show in our reporting that we know what we are doing (and what we are not) when we are forced to use NHST.
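As a minimal concrete illustration of one well-known problem (a simulation sketch added for this page, not an example from the talk; the group sizes, alpha level and number of runs are arbitrary choices): a test run at the .05 level will, by construction, declare about one in twenty true-null experiments "significant", so running enough tests guarantees a steady stream of spurious findings.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # 10,000 simulated two-sample experiments in which the null hypothesis
    # is true by construction: both groups come from the same distribution,
    # so every "significant" result counted below is a false positive.
    n_experiments, n_per_group = 10_000, 30
    false_positives = 0
    for _ in range(n_experiments):
        a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
        b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
        _, p = stats.ttest_ind(a, b)
        false_positives += p < 0.05

    # Prints roughly 5%: the rate NHST guarantees under the null at alpha = .05.
    print(f"'significant' null results: {false_positives / n_experiments:.1%}")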
Thu 4th February 2010, 2pm (Bancroft Road teaching room BR 4.01)
Thor Magnusson (University of Sussex)
I will talk about three different projects I've been working on recently: one using a webcam and neural networks to recognise hand gestures, another using the iLog to design instruments, and finally ixi lang (a live-coding language). I am interested in studying the "epistemic" nature of digital musical systems, trying to analyse what it is that we are doing when we design such systems. I will also talk about the ideology behind the ixiQuarks (a set of graphical user interface instruments aimed at sketching and improvisation, distributed as SuperCollider classes and as a standalone application).
In my research I have been investigating the nature of making creative tools in the digital realm through an active, philosophically framed and ethnographically inspired study of both practical and theoretical engagement. The study questions the nature of digital musical instruments, particularly in comparison with acoustic instruments. Through an enquiry into material epistemologies, the dichotomy between the acoustic and the digital is employed to illustrate the epistemic nature of digital artefacts, giving rise to a theory of epistemic tools.
Mon 1st February 2010, 4pm (Bioinformatics Hub)
Paul Hampton (C-Innovate): Practicalities of conducting ethnomethodological research
Paul Hampton has been working in the interaction design domain ever since completing his MSc in Human-Centred Computer Systems at Sussex University in 2001. For the last 4 years he has been running C-innovate Ltd, a user experience consultancy that has done considerable work supporting the implementation of mobile computing systems in public sector organisations.
Paul will be discussing the practicalities of conducting ethnomethodological research, illustrating his talk with examples from studies within the Police.
Thu 28th January 2010, 3pm (Bioinformatics Hub)
Johann Issartel (Dublin City University): Wavelet transform analysis of human motor behaviour.
One objective of this presentation is to introduce a new method for analyzing non-stationary signals in human motor behavior: the wavelet transform (WT). We will focus in particular on the cross-wavelet transform (an extension of the WT), which gives information about the interactions between two signals. Although this method has found applications in several fields such as physiology (Jobert et al., 1994) and neuroscience (Karrasch et al., 2004), almost no studies have used it in experimental psychology, and particularly in motor control.
The cross-wavelet transform (CWT) has two major advantages over more classical methods. First, the CWT allows us to characterize the nature of the interactions (e.g. motor coordination) between two signals regardless of the nature of the data (stationary or non-stationary). Second, it allows us to analyze the temporal evolution of the frequency, amplitude and phase properties of a (non-stationary) signal. The originality of the CWT (and the WT) is that it transforms the signal into a time-frequency representation that contains the same amount of information as the original time series.
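To make the computation concrete, here is a rough sketch in Python (a minimal NumPy implementation written for illustration, not code from the talk; the Morlet wavelet, scale grid and rising-frequency test signals are all assumed choices):

    import numpy as np

    def morlet_cwt(x, scales, w0=6.0):
        # Continuous wavelet transform of x with a complex Morlet wavelet:
        # one row per scale, giving a time-frequency representation.
        out = np.empty((len(scales), len(x)), dtype=complex)
        for i, s in enumerate(scales):
            u = np.arange(-4 * s, 4 * s + 1) / s   # wavelet support, in scale units
            psi = np.exp(1j * w0 * u - u**2 / 2) / np.sqrt(s)
            out[i] = np.convolve(x, psi, mode="same")
        return out

    # Two non-stationary test signals: sinusoids with the same slowly rising
    # frequency, the second lagging the first by a fixed phase offset.
    t = np.arange(0, 20, 0.01)                     # 20 s sampled at 100 Hz
    x = np.sin(2 * np.pi * (1 + 0.05 * t) * t)
    y = np.sin(2 * np.pi * (1 + 0.05 * t) * t - 0.5)

    scales = np.arange(4, 128)
    Wx, Wy = morlet_cwt(x, scales), morlet_cwt(y, scales)

    # Cross-wavelet transform: one transform times the complex conjugate of
    # the other. Its modulus shows where the two signals share power in time
    # and frequency; its angle gives their relative phase at each point.
    Wxy = Wx * np.conj(Wy)
    cross_power, relative_phase = np.abs(Wxy), np.angle(Wxy)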
Another objective is to show that this method can be used to make sense of a wide range of data in experimental psychology. To illustrate this, we will present several different kinds of data.