Category Archives: News

TeamFox SL: in the front line of the fight against Parkinson’s disease

There have been a number of reports in the media of late about a potentially significant breakthrough in the fight against Parkinson’s disease. These reports, which have appeared on the Parkinson’s UK website and through agencies such as Time Warner Cable News, concern a new vaccine which might slow, or even stop, the progression of the disease.

The vaccine is being developed in Austria with partial funding from the Michael J. Fox Foundation for Parkinson’s Research (referred to simply as the MJFF), and the publication of these reports suggested an opportunity for me to write about the ongoing work of TeamFox SL here in Second Life, both in the battle to find a lasting cure for Parkinson’s disease and in helping to support people diagnosed with Young Onset Parkinson’s Disease.

Parkinson’s is a degenerative disorder of the central nervous system, which manifests itself in many ways. The most visible symptoms are related to movement: shaking, rigidity, slowness of movement and difficulty with walking, but it can cause bladder and bowel problems, speech and communication difficulties, vision disorders, and can also give rise to psychological problems such as depression. Around one in 500 people suffer from the disease world-wide and there is currently no known cure, although symptoms can be controlled through medication, therapy and, in some cases, surgery.

It is most often seen as a disease affecting people aged 50 or older, but this in itself masks a fact: a form of Parkinson’s disease can strike people at a much younger age, and one in twenty of the 8 million Parkinson’s sufferers worldwide is below the age of 40. This variant is known as Young Onset Parkinson’s Disease (YOPD). It differs from older-onset Parkinson’s in that genetics appears to play a stronger role, and both the symptoms and the response to medication may differ.

Michael J. Fox highlighted the fact that Parkinson’s, often considered an “older person’s” disease, can strike at any time when, at the age of 29, he was diagnosed with Young Onset Parkinson’s Disease (image: Laura Cavanaugh/FilmMagic)

One of those under the age of 40 who was struck by the illness was Canadian-born actor, Michael J. Fox, who started showing symptoms as a YOPD sufferer when he was just 29 and filming Doc Hollywood. In 1998, he revealed his condition to the world before establishing the MJFF in 2000, which is dedicated to carrying out research into both combating the symptoms of Parkinson’s disease and to finding a cure. It is now the largest non-profit organisation researching Parkinson’s.

Funding such an aggressive research campaign as that run by the Foundation doesn’t come cheap, although it is highly targeted in how it spends its funds. So, to help with fundraising efforts, and in response to Michael’s fans wanting to help, in 2006 the MJFF established Team Fox, a grassroots community fundraising programme. In the eight years since its formation, Team Fox has raised over $27 million to help the Foundation’s research through a wide range of public-focused activities and events – which include Second Life, where TeamFox SL is helping to lead the fight.

TeamFox SL was founded by Solas NaGealai. In 1999, well before her involvement in Second Life, she was diagnosed with YOPD. “It was the same time as Michael J Fox disclosed his condition to the public, making my diagnosis less tragic and me feeling less alone,” she says of her situation. “The hardest part about being young with Parkinson’s is learning how to juggle a career and a family, along with the life-changing illness.”

When first diagnosed, Solas was a full-time fashion designer. However, as the illness progressed, she was forced to leave that career behind. Fortunately, her discovery of Second Life allowed her a way to re-engage in her passion for design, and she founded her own fashion label at Blue Moon Enterprise.

Even so, she wanted to do more, particularly to help with the Foundation’s work. “I knew I could not sit idle,” she says. “To quote Michael, ‘Our challenges don’t define us. Our actions do.’ The strength and optimism I saw in Michael created a spark inside me. With that optimism, I wanted to find a way to give back to the MJFF, to show support and help.”

That way came through Team Fox. Not only did Solas direct 100% of the proceeds from the sales of her SL designs to Team Fox, she also established TeamFox SL in 2008, the first Team Fox presence to be established in Second Life and officially sanctioned by the organisation.

Solas wearing one of her own gowns

TeamFox SL is dedicated to raising funds for the MJFF, disseminating information about the disease, and providing support for those diagnosed with the illness and their families. In this latter regard, TeamFox SL places special emphasis on providing information on YOPD and helping those diagnosed with it.

This focus is for two reasons. The first is Solas’ own experience as someone diagnosed with YOPD who has trodden the path faced by many others diagnosed with the condition and the unique challenges it presents. YOPD sufferers are faced with having to consider how to manage a chronic disease while engaged in a career, perhaps raising a family – or even starting a family – and maintaining as high a degree of wellness as possible for as long as possible.

The second reason for the focus on YOPD is the SL demographic itself. YOPD affects people aged 40 or younger, an age range which likely encompasses the greater portion of SL users, so it is probable that many Second Life users diagnosed with Parkinson’s are afflicted by YOPD.

In terms of fundraising, TeamFox SL helps to organise events and activities throughout the year and works closely with other Parkinson’s disease support groups in Second Life, particularly Creations for Parkinson’s, established by Barbie Alchemi, the daughter of Fran Serenade, whose own remarkable story I covered in these pages in 2013, and who has also been the subject of The Drax Files: World Makers.

Perhaps one of the most high-profile events co-organised by Solas and co-hosted by TeamFox SL and Creations for Parkinson’s was the Michael J. Fox Premiere Party, held at Angel Manor in September 2013 to mark the star’s return to television in his own series, at which a staggering L$425,000 was raised in just three hours through donations and a special silent auction.

Virtual humans: helping us to talk about ourselves

Hi, I’m Ellie. Thanks for coming in today. I was created to talk to people in a safe and secure environment. I’m not a therapist, but I’m here to learn about people, and would love to learn about you. I’ll ask a few questions to get us started…

These are the opening comments from SimSensei, a virtual human application and part of a suite of software tools which may in the future be used to assist in the identification, diagnosis and treatment of mental health issues by engaging people in conversation and by using real-time sensing and recognition of nonverbal behaviours and responses which may be indicative of depression or other disorders.

SimSensei and its companion application, MultiSense, have been developed by the Institute for Creative Technologies (ICT) at the University of Southern California (USC) as part of wide-ranging research into the use of various technologies – virtual humans, virtual reality, and so on – in a number of fields, including entertainment, healthcare and training.

In 2013, SimSensei and MultiSense underwent an extensive study, the results of which have just been published in a paper entitled It’s only a computer: Virtual humans increase willingness to disclose, which appears in the August 2014 volume of Computers in Human Behavior.

It is regarded as the first study to present empirical evidence that the use of virtual humans can encourage patients to disclose information about themselves more honestly and openly than they might when directly addressing another human being. Patients may regard a human interviewer as passing judgement on what they are saying, making them less willing to reveal information they feel is embarrassing or which may cause them emotional discomfort if mentioned.

Ellie is a virtual human, the “face” of SimSensei, designed to interact with human beings in a natural way and build a conversational rapport with them, as part of a suite of software which might be used to help in the diagnosis of mental health issues

SimSensei presents a patient with a screen-based virtual human, Ellie. The term “virtual human” is used rather than “avatar” because Ellie is driven by a complex AI programme which allows her to engage and interact with people entirely autonomously.

The focus of the software is to make Ellie appear as natural and as human as possible, in order for her to build up a rapport with the person talking to her. This is achieved by the software responding to subjects using both verbal and nonverbal communication, just like a human being.

During a conversation, SimSensei will adjust its reactions to a real person’s verbal and visual cues. Ellie will smile in response to positive displays of emotion (happiness, etc.), nod encouragement, or offer appropriate verbal encouragement during pauses in the flow of conversation, and so on. Rapport is further built by the software’s ability to engage in small talk and give natural-sounding responses to comments. For example, when one subject mentioned he was from Los Angeles, her response was, “Oh! I’m from LA myself!”

SimSensei’s interaction with a patient is driven by MultiSense, technically referred to as a “multimodal perception software framework”. MultiSense uses a microphone and camera to capture and map the patient’s verbal and nonverbal responses to SimSensei (facial expression, the direction in which they look, body movements, intonations and hesitations in their speech pattern, etc.). This data is analysed in real-time, and feedback is then given to SimSensei, helping to direct its responses as well as allowing it to detect signs of psychological distress which might be associated with depressive disorders or conditions such as post-traumatic stress disorder (PTSD), and react accordingly.
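
To make that capture-analyse-feedback pipeline a little more concrete, here is a minimal, hypothetical sketch of how such a loop might be structured. This is my own illustration in Python using OpenCV, not the ICT’s actual code; the cue detection is deliberately crude (a frontal-face detector standing in for MultiSense’s far more sophisticated tracking of gaze, posture and intonation):

```python
# A hypothetical perception-feedback loop, sketched for illustration only
# (NOT the ICT's MultiSense code): camera frames are checked for a simple
# nonverbal cue, and the resulting events are handed to the conversational
# agent so it can time its nods and verbal responses.
import cv2

def on_cue(cue: str) -> None:
    # Placeholder for the feedback channel into the virtual human; in the
    # framework described above, this is what would drive Ellie's nods,
    # smiles, and verbal encouragement in real time.
    print(f"perception event: {cue}")

def run_perception_loop(max_frames: int = 300) -> None:
    # A frontal-face detector is used as a crude stand-in: losing the face
    # is treated here as a proxy for gaze aversion or a downward look.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(0)  # default webcam
    try:
        for _ in range(max_frames):
            ok, frame = capture.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
            if len(faces) == 0:
                on_cue("gaze averted")  # agent might pause or soften its tone
            else:
                on_cue("engaged")       # agent might nod or offer encouragement
    finally:
        capture.release()

if __name__ == "__main__":
    run_perception_loop()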

During the ICT study, SimSensei and MultiSense were used to screen 239 people aged between 18 and 65, who were invited to a setting where they could interact with SimSensei as if they were undergoing an interview for admission to a hospital or clinic. On their arrival, some of them were told that they would be interacting with a fully automated piece of software with no human involvement at all, while others were told they’d be interacting with a computer avatar operated by a human. In reality, participants were assigned a fully or semi-automated virtual human entirely at random and without regard to what they were told.

When analysing the results, researchers found that those who believed they were talking purely to a virtual human were far more willing to divulge information and answer personal questions than those who had been told another human being was directing the virtual human. Researchers also noted that those who believed they were addressing a computer programme showed more intense emotional responses in their facial expressions than those who had been told beforehand that there was a human being operating the SimSensei virtual human.

MultiSense tracks the subject’s facial expression, head and eye movements, and body language, as well as the tonal quality of their voice and what they are saying. Here, the subject’s discomfort in talking about something results in gaze aversion, a downward look, and fractured comments, all noted by MultiSense (and marked in red), which then helps drive the SimSensei virtual human’s reactions and verbal response (shown in the central information box outlined in blue) – click to enlarge

Feedback was also gathered from the test subjects after their interviews, with those who believed they had been interacting with a computer programme indicating they felt far more comfortable revealing information about themselves than if they had been addressing a human being. By contrast, those who had been told that Ellie was being operated by a human being tended to indicate that they would have been more open in their responses to questions had they felt they were only addressing a software programme.

Jonathan Gratch is both the co-author of the study’s report and the Director of Virtual Human Research at ICT. Commenting on the study in a July 2014 article published by USC News, he said, “We know that developing a rapport and feeling free of judgment are two important factors that affect a person’s willingness to disclose personal information. The virtual character delivered on both these fronts and that is what makes this a particularly valuable tool for obtaining information people might feel sensitive about sharing.”

Jonathan Gratch, who leads the USC ICT’s research into the use of virtual humans and related technologies in a wide range of areas

Gratch and his colleagues are keen to stress that SimSensei and MultiSense are not intended to replace trained clinicians in dealing with people’s health issues. However, the results of the ICT’s study suggest that, given patients are more willing to disclose information about themselves both directly and through their nonverbal reactions to the software, the use of virtual humans could greatly assist in the diagnosis and treatment process.

In particular, the ICT is already initiating a number of healthcare projects to further explore the potential of virtual humans and the SimSensei / MultiSense framework. These include helping detect signs of depression, the potential to provide healthcare screening services for patients in remote areas, and in improving communication skills in young adults with autism spectrum disorder. Research is also being carried out into the effective use of virtual humans as complex role-playing partners to assist in the training of healthcare professionals, as well as the use of the technology in other training environments.

As noted towards the top of this article, the SimSensei / MultiSense study is just one aspect of the ICT’s research into the use of a range of virtual technologies, including virtual reality and immersive spaces, for a wide range of actual and potential applications. I hope to cover more of their work in future articles.

Images via the Institute for Creative Technologies and USC News.

Reflections on a prim: a potential way to create mirrors in SL

Update: just after pushing this out (slightly prematurely – thank you, Mona, for pointing out the error), Gwenners poked me on Twitter and reminded me of the 2006 experiments with reflections, supplying some links to shots from those heady days.

The ability to have honest-to-goodness mirror surfaces in Second Life which could reflect the world – and avatars – around them has often been asked for over the years, but has tended to be avoided by the Lab as it’s been seen as potentially resource-intensive and not the easiest thing to achieve. As a result people have in the past played around with various means to try to create in-world mirrors.

Zonja Capalini posted an article on using Linden water as an avatar mirror as far back as 2009

Zonja Capalini, for example, was perhaps one of the first to blog about using Linden water as a mirror (or at least the first I came across, thanks to Chestnut Rau and Whiskey Monday), and she certainly came up with some interesting results, as shown on the right, which I tried out for myself back in 2012.

However, achieving results in this way is also time-consuming and not always practical; you either have to purpose-build a set, or try shoving a jack under a region and hope you can persuade it to tip over on its side…

But there is hope on the horizon that perhaps we may yet see mirrors in SL (and OpenSim).

While it is still very early days, Zi Ree of the Firestorm team has been poking at things to see what might be achieved, and has had some interesting results using some additional viewer code and a suitable texture.

This has allowed Zi to define a basic way of generating real-time reflections, including those of avatars, on the surface of a prim. The work is still in its early days, and Zi points to the fact that she’s not a rendering pipeline expert, so there may be under-the-hood issues which have not come to light as yet. However, she has produced a number of videos demonstrating the work to date (see the sample below), and has raised a JIRA (STORM-2055) which documents the work so far; self-compilers can use the patch provided in the JIRA if they want to try things for themselves.
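
For those curious about the general technique (this is my own illustration of the standard approach to planar reflections, not a description of Zi’s patch), a mirror is typically produced by reflecting the camera across the plane of the mirror surface, rendering the scene from that mirrored viewpoint into a texture, and applying the texture to the prim face. The reflection itself is a standard transform:

```python
# The standard "mirrored camera" maths behind planar reflections
# (a generic sketch; not taken from Zi Ree's patch).
import numpy as np

def reflection_matrix(normal, d):
    """4x4 homogeneous reflection across the plane n . x = d (n unit length)."""
    n = np.asarray(normal, dtype=float)
    m = np.eye(4)
    m[:3, :3] -= 2.0 * np.outer(n, n)  # Householder part: I - 2 n n^T
    m[:3, 3] = 2.0 * d * n             # translation term: 2 d n
    return m

# Example: a mirror lying in the x = 5 plane, with the camera at (2, 0, 1).
# The scene would be re-rendered from the mirrored position, with the
# result drawn into a texture applied to the prim face.
mirror = reflection_matrix([1.0, 0.0, 0.0], 5.0)
camera = np.array([2.0, 0.0, 1.0, 1.0])
print(mirror @ camera)  # -> [8. 0. 1. 1.], the mirrored camera position
```

One reason such reflections are seen as costly is that the scene effectively has to be rendered a second time for each visible mirror, which is presumably part of why the Lab has historically been wary of the feature.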

Currently, the code only works when the viewer is running in non-deferred rendering (i.e. with the Advanced Lighting Model turned off). This does tend to make the in-world view a little flat, particularly if you’re used to seeing lighting and shadows.

However, having tried a version of the SL viewer with the code applied to it, I can say that it is very easy to create a mirror – all you need is a prim and a texture, a few tweaks to some debug settings, and possibly a relog. The results are quite impressive, as I hope the picture below demonstrates (click to enlarge, if required).

I see you looking at me …

Performance-wise, my PC and GPU didn’t seem to take too much of a hit – no doubt helped by the fact the mirror effect only works in non-deferred mode at present. Quite how things would fare were this to be tried with ALM active, and shadows and lighting enabled, could be a very different story.

As the effect is purely viewer-side, it does run up against the Lab’s “shared experience” policy; not only do you need a viewer with the code to create mirror surfaces, you need a viewer with the code to see the results. People using viewers without the code will just see a transparent prim face (or if the mirror texture is applied to the entire prim, nothing at all while it is 100% transparent).

This means that in order for mirrors of this nature to become the norm in Second Life, the idea, as offered through this approach, is going to have to be adopted by the Lab. Ideally, it would also need to work with the Advanced Lighting Model active. Zi additionally notes that some server-side updates are required in order for a simulator to be able to save things like the reflectiveness of a given mirror surface.

It’s all done with mirrors, y’know … (click to enlarge, if required)

Whether this work could herald the arrival of fully reflective surfaces in-world remains to be seen. It’s not clear how much interest in the idea has been shown by the Lab, but hopefully with the JIRA filed, they’ll take a look at things. There’s little doubt that if such a capability could be made to happen, and without a massive performance or system hit, then it could prove popular with users and add further depth to the platform.

Lab delays introduction of new Skill Gaming Policy

On Wednesday July 9th, Linden Lab announced forthcoming changes to their Skill Gaming policy, which were due to come into force from Friday August 1st, 2014. These would bring with them stricter controls over the operation of games of skill in Second Life, and see the introduction of a new region type – the Skill Gaming Region – which will only be accessible to those Second Life users who are of sufficient age and are located in a jurisdiction that Linden Lab permits for this kind of online gaming activity.

However, on Tuesday July 29th, 2014, the Lab issued a blog post stating that the new Skill Gaming policy will not now take effect until Monday September 1st, 2014, pointing to the number of applications received as the reason for the delay.

The update on the introduction of the revisions to Skill Gaming in Second Life reads in full:

As we recently blogged, we have a new policy for Skill Gaming in Second Life. In short, skill games that offer Linden Dollar payouts will be allowed in Second Life, but each game, its creator, its operator, and the region on which it’s operated must be approved by Linden Lab.

Today, we are changing the date that the changes described in our previous blog post go into effect. Instead of starting on August 1, the updated Skill Gaming Policy will go into effect on September 1, 2014. The original blog post and the FAQs will also be updated to reflect this new deadline.

Since our original announcement, we’ve received many applications from Second Life users who want to become approved skill game creators and operators. By moving the date back, we’ll be able to process a larger number of applications and also offer creators more time to make necessary changes to their games.

If you would like to apply to become an approved skill games creator and/or operator, you can do so through Echosign.

Infrastructure support for the new Skill Gaming regions has already been deployed to the main grid as a part of the server deployments of weeks 28 and 29.