
Lab issues Skill Gaming reminder

Linden Lab has issued a further reminder that the new Skill Gaming policy comes into effect on Monday, September 1st, and that it will be enforced. This means that as of that date, all games of skill operating in Second Life must:

  • Have been created by a skill games creator approved by Linden Lab
  • Be operated by a skill games operator approved by Linden Lab
  • Be located on a Skill Gaming Region operated by an approved skill games operator.

For anyone wondering what might be classified as a skill game, and thus fall under the above requirements, the Skill Gaming policy provides the following definition:

A game, implemented through an Inworld object: 1) whose outcome is determined by skill and is not contingent, in whole or in material part, upon chance; 2) requires or permits the payment of Linden Dollars to play; 3) provides a payout in Linden Dollars; and 4) is legally authorised by applicable United States and international law.  

The policy also notes that, “‘Skill Games’ are not intended to include and shall not include ‘gambling’ as defined by applicable United States and international law.” Gambling is, and remains, against the Second Life Terms of Service.

The new policy means that as of September 1st, 2014, anyone wishing to play games of skill must also meet certain criteria, which the Lab again defines as follows:

Should you wish to participate in Skill Gaming in Second Life, you represent and agree that you: (i) are at least nineteen (19) years of age; (ii) have the legal authority to agree to this Skill Gaming Policy; (iii) reside in, and are accessing a Skill Gaming Region from, a jurisdiction in which participation in Skill Gaming is legally authorized; and (iv) are of legal age to participate in Skill Gaming in your jurisdiction.

Additionally, those wishing to play games of skill must, “establish and maintain a Second Life account with accurate, current and complete information about yourself, including a valid payment method.”

The official reminder from the Lab further makes things clear:

Remember: if you are not an approved* Creator or Operator, you must cease the creation, distribution, and operation of skill games (as defined in the Skill Gaming Policy) by September 1, 2014. So if you haven’t already removed any unapproved skill games from your Marketplace shop, for example, or haven’t yet ceased operating them inworld, now is the time to do so. From that date forward, operating and/or creating skill games with L$ payouts, among other criteria as specified in the Skill Gaming Policy, without Linden Lab approval (and/or outside of Skill Gaming Regions) will be subject to enforcement measures.

If you live in a jurisdiction where skill gaming is permitted and you plan on playing these games in Skill Gaming Regions in Second Life, you should not need to do anything differently. However, adding payment information on file now is a good way to help ensure you’re able to play as soon as Skill Gaming Regions are live.

*As noted in the FAQ, creators and operators whose applications are under review at the deadline may continue to operate skill games while their applications are reviewed, provided that they have submitted all required documentation and continue to promptly respond to any inquiries from Linden Lab.

As I recently reported, a number of approved operators and games have appeared on the official Skill Gaming Approved Participants wiki page, but one of the concerns expressed by potential creators and operators is the remaining lack of clarity around aspects of the new policy. For example, the Lab has yet to give any indication of the likely quarterly fees which are to be levied, and this may still be causing people to hesitate in submitting an application as a creator and/or operator of skill games.

However, this doesn’t alter the fact that all operators and creators of skill games will have to be in compliance with the policy from Monday, September 1st – and for those who have not yet submitted an application, that means ceasing the creation, distribution and operation of skill games, as noted in the Lab’s blog post.


SL viewer to get unified snapshot floater

Update, August 26th: The unified snapshot floater is now available in a release candidate viewer, version 3.7.15.293295.

Niran V Dean is familiar to many as the creator of the Black Dragon viewer, and before that, Niran’s Viewer. Both viewers have been innovative in their approach to UI design and presentation, and both have been the subject of reviews in this blog over the years, with Black Dragon still reviewed as and when versions are released.

One of the UI updates Niran recently implemented in Black Dragon was a more unified approach to the various picture-taking floaters which are becoming increasingly available across many viewers. There’s the original snapshot floater, and there are the Twitter, Flickr and Facebook floaters offered through the Lab’s SL Share updates to the official viewer, which are now also available in a number of TPVs.

In Black Dragon, Niran redesigned the basic snapshot floater, offering a much improved preview screen and buttons which not only provide access to the familiar Save to Disk, Save to Inventory, etc., options, but also provide access to the Flickr, Twitter, and Facebook panels.

He also submitted the code to Linden Lab, who have approved it; it is currently working its way through their QA and testing cycle and should be appearing in a flavour of the official viewer soon (see STORM-2040).

A test build of the viewer with the new, more unified approach is available, and I took it for a quick spin to try out the snapshot-related changes. Note it is a work-in-progress, so some things may yet be subject to change between now and release.

First off, the snapshot floater is still accessed via the familiar Snapshot button, so there’s no looking for a new label or icon. The Twitter, Flickr and Facebook floaters and buttons are also still available (so if one or other of them is your preferred method of taking pictures, you can still open them without having to worry about going an extra step or two through the snapshot floater).

Opening the new snapshot floater immediately reveals the extent of Niran’s overhaul – and as with Black Dragon, I like it a lot.

The new snapshot floater by Niran V Dean: note the button options for Flickr, Twitter and Facebook uploads

The increased size of the preview panel is immediately apparent, and might at first seem very obtrusive. However, when not required, it can be nicely hidden away by clicking the << on the top left of the floater next to the Refresh button, allowing a more unobstructed in-world view when framing an image (you can also still minimise the floater if you prefer).

Beneath the Refresh button are the familiar snapshot floater options to include the interface and HUDs in a snapshot, the colour drop down, etc., and – importantly – the SL Share 2 filter drop down for post-processing images. The placing of the latter is important, as it is the first clue that filters can, with this update, be applied to snaps saved to inventory or disk or e-mailed or – as is liable to prove popular – uploaded to the profile feed.

With the new snapshot floater, you will be able to add filters to the snaps you save to disk or inventory, or which you e-mail or upload to your profile feed – here is a snap being prepared to save to disk with the lens flare filter added

Below these options are the familiar buttons allowing you to save a snapshot to disk, inventory, your feed, or to e-mail it to someone. Clicking any of these opens its individual options, which replace the buttons themselves – to return to them, simply click the Cancel button. Saving a snapshot will restore the buttons automatically.

Among these buttons are those for uploading to Flickr, Twitter or Facebook. These work slightly differently, as clicking any one of them will close the snapshot floater and open the relevant upload floater.

While this may seem inconvenient compared to having everything in the one floater, it actually makes sense. For one thing, trying to re-code everything into an all-in-one floater would be a fairly non-trivial task, particularly as Twitter, Flickr and Facebook have their own individual authentication requirements and individual upload options (such as sending a text message with a picture uploaded to Twitter, and the ability to check your friends on Facebook). Also, as mentioned earlier, keeping the floaters for Flickr, Twitter and Facebook separate means they can continue to be accessed directly by people who use them in preference to the snapshot floater.

However, this latter point doesn’t mean they’ve been left untouched. Niran has cleaned up much of their respective layouts and, in doing so, has reduced their screen footprints. The result is three floaters that are all rather more pleasing to the eye.

Niran’s revised Facebook floater, left – note the new Connect button, removing the need for an extra tab; and the original floater on the right

All told, these are a sweet set of updates which make a lot of sense. It may be a while longer before they surface in a viewer; I assume they’ll likely appear in a Snowstorm update rather than a dedicated viewer of their own, but that’s just my guess. Either way, they’re something to look forward to.

Kudos to Niran for the work in putting this together, and to Oz and the Lab for taking the code on and adding it to the viewer.

TeamFox SL: in the front line of the fight against Parkinson’s disease

There have been a number of reports in the media of late about a potentially significant breakthrough in the fight against Parkinson’s disease.  These reports, which have appeared on the pages of the Parkinson’s UK website, and through agencies such as Time Warner Cable News, are about a new vaccine which might slow, or even stop, the progression of the disease.

The vaccine is being developed in Austria with partial funding from the Michael J. Fox Foundation for Parkinson’s Research (referred to simply as the MJFF), and the publication of the reports on the work suggested an opportunity for me to write about the ongoing work of TeamFox SL here in Second Life in the battle to find a lasting cure for Parkinson’s disease, and in helping to support people diagnosed with Young Onset Parkinson’s Disease.

Parkinson’s is a degenerative disorder of the central nervous system, which manifests itself in many ways. The most visible symptoms are related to movement: shaking, rigidity, slowness of movement and difficulty with walking, but it can cause bladder and bowel problems, speech and communication difficulties, vision disorders, and can also give rise to psychological problems such as depression. Around one in 500 people suffer from the disease world-wide and there is currently no known cure, although symptoms can be controlled through medication, therapy and, in some cases, surgery.

It is most often seen as a disease affecting people aged 50 or older, but this in itself masks a fact: a form of Parkinson’s disease can strike people at a much younger age, and one in twenty of the 8 million Parkinson’s sufferers worldwide is below the age of 40. This variant of Parkinson’s is known as Young Onset Parkinson’s Disease (YOPD). It differs from older onset Parkinson’s in that genetics appears to play a stronger role, and the symptoms may differ, together with the response to medication.

Michael J. Fox highlighted the fact that Parkinson’s, often considered an “older person’s” disease, can strike at any time when, at the age of 29, he was diagnosed with Young Onset Parkinson’s disease (image: Laura Cavanaugh/FilmMagic)

One of those under the age of 40 who was struck by the illness was Canadian-born actor Michael J. Fox, who started showing symptoms of YOPD when he was just 29 and filming Doc Hollywood. In 1998, he revealed his condition to the world before establishing the MJFF in 2000, which is dedicated to carrying out research into both combating the symptoms of Parkinson’s disease and finding a cure. It is now the largest non-profit organisation researching Parkinson’s.

Funding such an aggressive research campaign as that run by the Foundation doesn’t come cheap, although they are highly targeted in how they spend their funds. So, to help with fundraising efforts, and in response to Michael’s fans wanting to help, the MJFF established Team Fox in 2006, a grassroots community fundraising programme. In the eight years since its formation, Team Fox has raised over $27 million to help the Foundation’s research through a wide range of public-focused activities and events – which include Second Life, where TeamFox SL is helping to lead the fight.

TeamFox SL was founded by Solas NaGealai. In 1999, well before her involvement in Second Life, she was diagnosed with YOPD. “It was the same time as Michael J Fox disclosed his condition to the public, making my diagnose less tragic and me feeling less alone,” she says of her situation. “The hardest part about being young with Parkinson’s is learning how to juggle a career and a family, along with the life changing illness.”

When first diagnosed, Solas was a full-time fashion designer. However, as the illness progressed, she was forced to leave that career behind. Fortunately, her discovery of Second Life allowed her a way to re-engage in her passion for design, and she founded her own fashion label at Blue Moon Enterprise.

Even so, she wanted to do more, particularly to help with the Foundation’s work. “I knew I could not sit idle,” she says. “To quote Michael, ‘Our challenges don’t define us. Our actions do.’ The strength and optimism I saw in Michael created a spark inside me. With that optimism, I wanted to find a way to give back to the MJFF, to show support and help.”

That way came through Team Fox. Not only did Solas direct 100% of the proceeds from the sales of her SL designs to Team Fox, she also established TeamFox SL in 2008, the first Team Fox presence to be established in Second Life, and the first to be officially sanctioned by the organisation.

Solas wearing one of her own gowns

TeamFox SL is dedicated to raising funds for the MJFF, disseminating information about the disease, and providing support for those diagnosed with the illness and their families. In this latter regard, TeamFox SL places special emphasis on providing information on YOPD and helping those diagnosed with it.

This focus is for two reasons. The first is Solas’ own experience as someone diagnosed with YOPD who has trodden the route faced by many others diagnosed with the condition and the unique challenges it presents. YOPD sufferers are faced with having to consider how to manage a chronic disease while engaged in a career, perhaps raising a family – or even starting a family – and maintaining as high a degree of wellness as possible for as long as possible.

The second reason for the focus on YOPD is the SL demographic itself. YOPD affects people who are 40 or younger, an age range which probably encompasses the greater portion of SL users, and so it is probable that many of those diagnosed with Parkinson’s and who use Second Life are affected by YOPD.

In terms of fundraising, TeamFox SL helps to organise events and activities throughout the year and works closely with other Parkinson’s disease support groups in Second Life, particularly Creations for Parkinson’s, established by Barbie Alchemi, the daughter of Fran Serenade, whose own remarkable story I covered in these pages in 2013, and who has also been the subject of The Drax Files: World Makers.

Perhaps one of the most high-profile events co-organised by Solas and co-hosted by TeamFox SL and Creations for Parkinson’s was the Michael J. Fox Premiere Party, held at Angel Manor in September 2013 to mark the star’s return to television in his own series, and at which a staggering L$425,000 was raised in just three hours through donations and a special silent auction.


Virtual humans: helping us to talk about ourselves

Hi, I’m Ellie. Thanks for coming in today. I was created to talk to people in a safe and secure environment. I’m not a therapist, but I’m here to learn about people, and would love to learn about you. I’ll ask a few questions to get us started…

These are the opening comments from SimSensei, a virtual human application and part of a suite of software tools which may in the future be used to assist in the identification, diagnosis and treatment of mental health issues by engaging people in conversation and by using real-time sensing and recognition of nonverbal behaviours and responses which may be indicative of depression or other disorders.

SimSensei and its companion application, MultiSense, have been developed by the Institute for Creative Technologies (ICT) at the University of Southern California (USC) as part of wide-ranging research into the use of various technologies  – virtual humans, virtual reality, and so on – in a number of fields, including entertainment, healthcare and training.

In 2013, SimSensei and MultiSense underwent an extensive study, the results of which have just been published in a report entitled It’s only a computer: Virtual humans increase willingness to disclose, which appears in the August 2014 volume of Computers in Human Behavior.

It is regarded as the first study to present empirical evidence that the use of virtual humans can encourage patients to disclose information about themselves more honestly and openly than might be the case when they are directly addressing another human being. A human interviewer may be regarded as passing judgement on what is being said, making patients less willing to reveal information they feel is embarrassing or which may cause them emotional discomfort if mentioned.

Ellie is a virtual human, the “face” of SimSensei, designed to interact with human beings in a natural way, and build a conversational rapport with them as a part of a suite of software which might be used to help in the diagnosis of mental ailments

SimSensei presents a patient with a screen-based virtual human, Ellie. The term “virtual human” is used rather than “avatar” because Ellie is driven by a complex AI programme which allows her to engage and interact with people entirely autonomously.

The focus of the software is to make Ellie appear as natural and as human as possible in order for her to build up a rapport with the person talking to her. This is achieved by the software responding to subjects using both verbal and nonverbal communication, just like a human being.

During a conversation, SimSensei will adjust its reactions to a real person’s verbal and visual cues. Ellie will smile in response to positive displays of emotion (happiness and the like), nod or offer appropriate verbal encouragement during pauses in the flow of conversation, and so on. Rapport is further built by the software being able to engage in small talk and give natural-sounding responses to comments. For example, when one subject mentioned he was from Los Angeles, her response was to say, “Oh! I’m from LA myself!”

SimSensei’s interaction with a patient is driven by MultiSense, which is technically referred to as a “multimodal perception software framework”. MultiSense uses a microphone and camera to capture and map the patient’s verbal and nonverbal responses to SimSensei (facial expression, the direction in which they look, body movements, intonations and hesitations in their speech pattern, etc.). This data is analysed in real-time, and feedback is then given to SimSensei, helping to direct its responses as well as allowing it to detect signs of psychological distress which might be associated with depressive disorders or conditions such as post-traumatic stress disorder (PTSD), and react accordingly.
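
To picture the kind of loop being described – capture, real-time analysis, then feedback into the virtual human’s behaviour – here is a minimal, purely illustrative sketch. The class and function names are hypothetical and are not taken from the ICT software; this is a toy version of the idea, not the actual MultiSense / SimSensei implementation.

```python
# Toy sketch of a multimodal perception-to-response loop, loosely modelled on
# the MultiSense -> SimSensei pipeline described above. All names here are
# hypothetical; none of this is ICT code.
from dataclasses import dataclass


@dataclass
class PerceptionFrame:
    """One time-slice of sensed verbal and nonverbal cues."""
    gaze_down: bool           # is the subject looking downward?
    smile_intensity: float    # 0.0 (none) to 1.0 (broad smile)
    speech_pause_secs: float  # length of the current pause in speech
    transcript: str           # what the subject said in this slice


def analyse(frame: PerceptionFrame) -> dict:
    """Crude stand-in for real-time analysis: flag cues of interest."""
    return {
        "possible_distress": frame.gaze_down and frame.speech_pause_secs > 2.0,
        "positive_affect": frame.smile_intensity > 0.5,
        "long_pause": frame.speech_pause_secs > 2.0,
    }


def choose_response(cues: dict) -> str:
    """Pick the virtual human's next behaviour from the analysed cues."""
    if cues["possible_distress"]:
        return "lean in and say: 'Take your time, there's no rush.'"
    if cues["positive_affect"]:
        return "smile and nod"
    if cues["long_pause"]:
        return "offer verbal encouragement: 'Mm-hmm, go on...'"
    return "continue listening"


if __name__ == "__main__":
    # Simulated frames standing in for live camera/microphone input.
    frames = [
        PerceptionFrame(False, 0.7, 0.5, "I'm from Los Angeles."),
        PerceptionFrame(True, 0.0, 3.1, ""),
    ]
    for frame in frames:
        print(choose_response(analyse(frame)))
```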

During the ICT study, SimSensei and MultiSense were used to screen 239 people aged between 18 and 65, who were invited to a setting where they could interact with SimSensei as if they were undergoing an interview for admission to a hospital or clinic. On their arrival, some of them were told that they would be interacting with a fully automated piece of software with no human involvement at all, while others were told they’d be interacting with a computer avatar operated by a human. In reality, participants were assigned a fully or semi-automated virtual human entirely at random and without regard to what they were told.

When analysing the results, researchers found that those who believed they were talking purely to a virtual human were far more willing to divulge information and answer personal questions than those who had been told another human being was directing the virtual human. Researchers also noted that those who believed they were addressing a computer programme showed more intense emotional responses in their facial expressions than those who had been told beforehand that there was a human being operating the SimSensei virtual human.

MultiSense tracks the subject’s facial expression, head and eye movements, and body language, as well as the tonal quality of their voice and what they are saying. Here, the subject’s discomfort in talking about something results in gaze aversion, a downward look and fractured comments, all noted by MultiSense (and marked in red), which then helps drive the SimSensei virtual human’s reactions and verbal response (shown in the central information box outlined in blue)

Feedback was also gathered from the test subjects after their interviews, with those who believed they had been interacting with a computer programme indicating they felt far more comfortable in revealing information about themselves than if they had been addressing a human being. By contrast, those who had been told that Ellie was being operated by a human being tended to indicate that they would have been more open in their responses to questions had they felt they were only addressing a software programme.

Jonathan Gratch is both a co-author of the study’s report and the Director of Virtual Human Research at ICT. Commenting on the study in a July 2014 article published in USC News, he said, “We know that developing a rapport and feeling free of judgment are two important factors that affect a person’s willingness to disclose personal information. The virtual character delivered on both these fronts and that is what makes this a particularly valuable tool for obtaining information people might feel sensitive about sharing.”

Jonathan Gratch leads the USC ICT’s research into the use of virtual humans and related technologies in a wide range of areas

Gratch and his colleagues are keen to stress that SimSensei and MultiSense are not intended to replace trained clinicians in dealing with people’s health issues. However, the results of the ICT’s study suggest that, given patients are more willing to disclose information about themselves both directly and through their nonverbal reactions to the software, the use of virtual humans could greatly assist in the diagnosis and treatment process.

In particular, the ICT is already initiating a number of healthcare projects to further explore the potential of virtual humans and the SimSensei / MultiSense framework. These include helping to detect signs of depression, providing healthcare screening services for patients in remote areas, and improving communication skills in young adults with autism spectrum disorder. Research is also being carried out into the effective use of virtual humans as complex role-playing partners to assist in the training of healthcare professionals, as well as the use of the technology in other training environments.

As noted towards the top of this article, the SimSensei / MultiSense study is just one aspect of the ICT’s research into the use of a range of virtual technologies, including virtual reality and immersive spaces, for a wide range of actual and potential applications. I hope to cover some more of their work in future articles.

Images via the Institute for Creative Technologies and USC News.