London launches world’s first contactless payment scheme for street performers


Here’s a casualty of the cashless society you might not have previously thought of: the humble street performer. After all, if more of us are paying our way with smartphones and contactless cards, how can we give spare change to musicians on the subway? London has one solution: a new scheme that outfits performers with contactless payment terminals.

The project was launched this weekend by the city’s mayor, Sadiq Khan, and is a collaboration with Busk In London (a professional body for buskers) and the Swedish payments firm iZettle (which was bought this month by PayPal for $2.2 billion). A select few performers have been testing iZettle’s contactless readers on the streets for the past few weeks, and Khan now says the scheme will be rolled out across London’s 32 boroughs.

Charlotte Campbell, a full-time street performer who was part of the trial, told BBC News that the new tech “had a significant impact on contributions.” Said Campbell: “More people than ever tap-to-donate whilst I sing, and often, when one person does, another follows.”

The readers need to be connected to a smartphone or tablet, and accept payments of fixed amounts (set by the individual performer). They work with contactless cards, phones, and even smartwatches. There’s no detail yet on how many readers will be provided to London’s street performers, or whether they will have to pay for the readers themselves.
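iZettle hasn't published the configuration details, but the fixed-amount model described above is simple enough to sketch. The snippet below is purely illustrative: FixedAmountReader, TapEvent, and every other name in it are hypothetical and are not part of any real iZettle SDK.

```typescript
// Hypothetical sketch of the fixed-amount donation model described above.
// None of these types correspond to iZettle's real SDK; they only illustrate
// the flow: the performer sets one amount, and each tap charges exactly that.

type TapSource = "card" | "phone" | "smartwatch";

interface TapEvent {
  source: TapSource;
  timestamp: Date;
}

class FixedAmountReader {
  // The performer picks a single donation amount up front (in pence).
  constructor(private readonly amountPence: number) {}

  // Every tap charges the same fixed amount, whatever device was tapped.
  handleTap(tap: TapEvent): { charged: number; source: TapSource } {
    return { charged: this.amountPence, source: tap.source };
  }
}

// Usage: a busker sets a £2 fixed donation.
const reader = new FixedAmountReader(200);
console.log(reader.handleTap({ source: "phone", timestamp: new Date() }));
// -> { charged: 200, source: "phone" }
```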

Although individuals do sometimes set up their own contactless payment systems (and in China, it’s not uncommon to see street performers and beggars use QR codes to solicit mobile tips), this seems to be the first scheme of its kind spearheaded by a city authority.

Microsoft Buys Conversational AI Company Semantic Machines


In a blog post, Microsoft Corporate Vice President and Chief Technology Officer of AI & Research David Ku announced the acquisition of Berkeley, California-based conversational AI company Semantic Machines. The natural language processing technology developed by Semantic Machines will be integrated into Microsoft products such as Cortana and the Azure Bot Service.

On its website, Semantic Machines says that existing natural language systems such as Apple's Siri, Microsoft's Cortana, and Google Now understand commands, but not conversations. The most typical commands digital assistants can handle today include weather reports, music controls, setting timers, and creating reminders; Semantic Machines' technology, by contrast, is built to follow a conversation rather than respond to isolated commands. "For rich and effective communication, intelligent assistants need to be able to have a natural dialogue instead of just responding to commands," said Ku.
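Semantic Machines hasn't published how its system works, so the sketch below is only a toy illustration of the distinction Ku draws, not the company's technology: a command-only assistant maps one utterance to one action, while a conversational one carries state between turns so a fragment like "7pm tomorrow" can be understood as an answer.

```typescript
// Illustrative sketch only. A command-style assistant matches one utterance
// to one action; a conversational one keeps state across turns so that
// follow-up fragments can be resolved against earlier context.

interface DialogueState {
  intent?: "set_reminder";
  time?: string;
}

function handleTurn(utterance: string, state: DialogueState): string {
  if (utterance.startsWith("remind me")) {
    state.intent = "set_reminder";
    return "When should I remind you?";
  }
  // A pure command system would fail here; a conversational one uses the
  // accumulated state to interpret the fragment as an answer.
  if (state.intent === "set_reminder" && /\d/.test(utterance)) {
    state.time = utterance;
    return `Reminder set for ${state.time}.`;
  }
  return "Sorry, I only understood that as a standalone command.";
}

// A two-turn conversation that a command-only system could not follow:
const state: DialogueState = {};
console.log(handleTurn("remind me to call the dentist", state)); // asks for a time
console.log(handleTurn("7pm tomorrow", state));                  // uses context
```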

Microsoft turns SharePoint into the simplest VR creation tool yet


Microsoft is sticking with its pragmatic approach to VR with SharePoint spaces, a new addition to its collaboration platform that lets you quickly build and view Mixed Reality experiences. It's a lot like how PowerPoint made it easy for anyone to create business presentations. SharePoint spaces features templates for things like a gallery of 3D models or 360-degree videos, all of which are viewable in Mixed Reality headsets (or any browser that supports WebVR). While they're certainly not complex virtual environments, they're still immersive enough to be used for employee training, or as a quick virtual catalog for your customers.

"Until now, it has been prohibitively complex and costly to develop customized MR apps to address these and other business scenarios," wrote Jeff Teper, Microsoft's corporate VP for OneDrive, SharePoint and Office, in a blog post today. "SharePoint spaces empower creators to build immersive experiences with point-and-click simplicity."

Google AI can make calls for you

During the on-stage demonstration, Google played recordings of calls to a number of businesses, including a hair salon and a Chinese restaurant. At no point did either of the people on the other end of the line appear to suspect that the entity they were interacting with was a bot. And how could they, when the Assistant would even throw in random "ums," "ahhs," and other verbal fillers people use when they're in the middle of a thought? According to the company, the system has already generated hundreds of similar interactions over the course of the technology's development.
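Google hasn't said how the system decides where those disfluencies go, but the basic idea, sprinkling fillers into otherwise scripted speech, can be shown in a toy form. Everything below is illustrative and bears no relation to the actual speech pipeline:

```typescript
// Toy illustration only: randomly insert verbal fillers like "um" into a
// scripted response to mimic a speaker who is mid-thought. This is not how
// Google's system works; it only demonstrates the effect described above.

const FILLERS = ["um", "uh", "mm-hmm"];

function addVerbalFillers(sentence: string, probability = 0.2): string {
  const words = sentence.split(" ");
  const out: string[] = [];
  for (const word of words) {
    // Occasionally pause before a word, as a human might while thinking.
    if (Math.random() < probability) {
      out.push(FILLERS[Math.floor(Math.random() * FILLERS.length)] + ",");
    }
    out.push(word);
  }
  return out.join(" ");
}

console.log(addVerbalFillers("I would like to book a haircut for Tuesday"));
// e.g. "I would, um, like to book a haircut for, uh, Tuesday"
```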
