Microsoft’s Skype Lite recently received an update, the main calling card of which is its integration with Aadhaar. The intent seems to be to enable Skype’s users to verify each other’s identities in the virtual world and to make interactions more secure.
For the benefit of our international readers, Aadhaar is an individual identification number issued by the Unique Identification Authority of India (UIDAI). It serves as a proof of identity anywhere in the country, akin to the Social Security Number in the U.S.
“Aadhaar is considered to be the world’s largest national identification number project and allows users in India to communicate with government, business, and others with a higher level of trust and lower potential for fraud. With the latest version of Skype Lite, Aadhaar integration can be used to verify user identities online, helping them communicate more securely with others,” the company shared in a statement.
Skype Lite, as the name suggests, is a “light” version of the prominent video calling application. The custom build for India also includes an advanced algorithm optimised to consume less internet data during video communication. The functionality is particularly important as a considerable portion of users rely on mobile data to access the application, and mobile coverage quality is not yet fully reliable everywhere.
Available currently only for Android users, the app has seen close to five million downloads on the Google Play Store in just four months since its launch in February 2017.
The integration with Aadhaar comes as a bid to enhance the security on the app and is perhaps an implied nod in the direction of payment-integration within Skype in the future.
To confirm a user’s identity using Aadhaar, you click on “Verify Aadhaar Identity”, enter your 12-digit Aadhaar number, and then authenticate with a one-time password sent via SMS. Once your identity has been validated, you can choose to share pre-selected Aadhaar information with the other person to assure them of your identity.
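For a sense of what that authentication step involves, here is a minimal sketch of OTP issuance and verification in Python. Everything here is a hypothetical illustration – the function names and the flow are ours, not Skype Lite’s or UIDAI’s:

```python
import hmac
import secrets
import time

OTP_TTL_SECONDS = 300  # OTPs are typically short-lived

def issue_otp(store: dict, aadhaar_number: str) -> str:
    """Generate a 6-digit one-time password and remember when it was issued.
    In the real flow, the OTP is delivered to the user's registered mobile via SMS."""
    otp = f"{secrets.randbelow(10**6):06d}"
    store[aadhaar_number] = (otp, time.time())
    return otp

def verify_otp(store: dict, aadhaar_number: str, submitted: str) -> bool:
    """Check the submitted OTP against the issued one, rejecting expired codes."""
    record = store.pop(aadhaar_number, None)  # single-use: remove on any attempt
    if record is None:
        return False
    otp, issued_at = record
    if time.time() - issued_at > OTP_TTL_SECONDS:
        return False
    return hmac.compare_digest(otp, submitted)  # constant-time comparison

# Example: a user verifies once, then a replay of the same code fails
store = {}
code = issue_otp(store, "123456789012")
print(verify_otp(store, "123456789012", code))  # True
print(verify_otp(store, "123456789012", code))  # False (OTP is single-use)
```

The single-use and expiry checks are the heart of any OTP scheme: even if a code leaks after the fact, it cannot be replayed.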
For example, you may wish to make a Skype Lite call to an important business client or government representative. By using Aadhaar, both parties can verify their identity at the beginning of the call to prevent impersonation fraud, allowing people to conduct business via digital means with more trust.
Microsoft has also assured users that it does not save or pry into the Aadhaar information they share, stating that all such personal information, quite like your conversations, is encrypted.
In addition to the Aadhaar integration, the app comes with other India-centric features as well.
From the looks of it, Microsoft is leaning into the Indian market with a purpose, customizing its offerings for Indian users by responding to their usability needs and preferences. This is a move that might help Microsoft gain some much-needed traction in India. Its computer software is used widely in the country, though often in pirated versions, so even though Microsoft is a household name here, it doesn’t quite get to take that popularity to the bank. It might, now.
Addressing a growing concern, a proposed panacea for user privacy comes in the form of a new application.
For those of you who break into a hot (or cold) sweat about all the traces your internet explorations are leaving behind, you may finally have something that could let you get off the carousel.
Swedish developers Wille Dahlbo and Linus Unnebäck created Deseat.me, an application that offers a way to wipe your entire existence off the internet in a few clicks.
When one thinks about it, it seems impossible to remember the number of websites and apps we have signed up for across our travels on the internet. From that news website you signed up for to read one single article and never went back to, to the app you used once upon a time, it all starts to form a clutter pile that we can never quite get rid of.
The spam and junk mail that we get in the updates and social tabs of our email usually run into the thousands – all a sign that somewhere, someone is reading the ‘cookies’ (the digital avatar of the mythical Crystal Ball) and using that data for intents and purposes known only to them.
Well, no need to sweat it anymore!
Deseat.me is the solution to the clutter problem. It finds all your available data and asks you what you want to do with each element of the pile.
You start by signing in with your email address on their website, which, of course, is Deseat.me. Currently, Deseat.me lets you sign in with Google, Outlook, Hotmail, MSN, and Live IDs.
Say you used your Google account to sign into various apps, news websites, social media platforms, and all sorts of other websites. Once you sign in, Deseat.me scans for all the apps and services you have created an account with and provides a list of them. Depending on how long the list is, this can take anywhere between five minutes and about an hour.
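One plausible way such a scan works is by searching the mailbox for sign-up and welcome emails. A toy illustration of the idea in Python – the subject-line patterns and sample inbox are invented, and Deseat.me’s actual heuristics are not public:

```python
import re

# Subject-line patterns that typically indicate an account was created
SIGNUP_PATTERNS = [
    r"\bwelcome to\b",
    r"\bconfirm your (email|account)\b",
    r"\bverify your (email|account)\b",
    r"\byour new account\b",
]

def find_signup_services(messages):
    """Given (sender_domain, subject) pairs, return the set of domains
    that appear to have sent a sign-up/welcome email."""
    services = set()
    for domain, subject in messages:
        if any(re.search(p, subject, re.IGNORECASE) for p in SIGNUP_PATTERNS):
            services.add(domain)
    return services

# A made-up inbox sample
inbox = [
    ("news-site.example", "Welcome to News Site!"),
    ("shop.example", "Your order has shipped"),
    ("app.example", "Please verify your email address"),
]
print(sorted(find_signup_services(inbox)))  # ['app.example', 'news-site.example']
```

Notice that the shipping notification is ignored: only mails that look like account creation contribute to the list, which is why the scan can take a while on a large mailbox.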
After it is done putting the data together, you can then choose to stay on certain websites, and delete your existence off of others.
Deseat.me shows you the name of the website on the left and the option to keep or delete it on the right. You can also choose to delete your existence off of all websites that the app collates, or choose to selectively retain accounts on certain websites.
Also, you don’t have to worry about sharing your account details with Deseat.me. The service uses the OAuth 2.0 protocol, an industry-standard protocol for conveying authorization decisions, so it does not have access to your account credentials either.
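In an OAuth 2.0 authorization-code flow, the app redirects you to the provider’s own consent page and never sees your password; it only receives a short-lived code at its callback URL, which it can later exchange for a scoped token. A minimal sketch of that first step in Python – the endpoint, client ID, and scope below are hypothetical placeholders, not real provider values:

```python
import secrets
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical values for illustration; real providers publish their own.
AUTHORIZE_ENDPOINT = "https://accounts.provider.example/o/oauth2/auth"
CLIENT_ID = "deseat-demo-client"

def build_authorization_url(redirect_uri: str, scope: str):
    """Step 1 of the authorization-code flow: send the user to the provider's
    consent page. The app never handles the user's password; it only receives
    a short-lived code at redirect_uri, later exchanged for an access token."""
    state = secrets.token_urlsafe(16)  # CSRF protection, checked on redirect
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}", state

url, state = build_authorization_url("https://app.example/callback", "email.readonly")
query = parse_qs(urlparse(url).query)
print(query["response_type"])  # ['code']
```

Because the token is scoped to what the user consented to (here, a hypothetical read-only email scope), the service can scan your mail without ever being able to log in as you.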
“Privacy and data security is something we regard as extremely important,” says their website.
The developers have also highlighted that the program runs on the user’s computer, rather than their servers, which is another element that ensures the user’s privacy.
But at the end of it all, you are free from some of the clutter in your digital life – or the entirety of it, if that is what you’d prefer.
Facebook knows where you are and what you’re doing, all day long. Not only can Facebook keep up with you through its social networking platform, it can also reach in for information through WhatsApp, FB Messenger, or Instagram.
It’s not Big Brother, but given all that it knows about you, it might be close enough.
But for once, we’re saying this in a good way.
Facebook has close to 2 billion users around the world – a massive number that translates to one in every four people on God’s green Earth using one (or more) of Facebook’s platforms.
This kind of reach has bestowed some special superpowers on Facebook in the form of the sheer amount of data it holds, and that is a power Facebook could use to do a lot.
The social networking giant has decided to put those superpowers to use, this time for social good.
Let’s first talk about one of Facebook’s projects that it’s been piloting on the back end.
Realising its own potential to be of assistance during disasters, and its ability to aid crisis management efforts, Facebook has partnered with UNICEF, the International Federation of Red Cross and Red Crescent Societies, and the World Food Programme, aiming to use maps to improve how communities receive help after disasters.
Facebook has put a system in place to collate user data after an incident into three types of maps:
The first are the Location Density maps, which show where people are physically located before, during and after a crisis.
The second are Movement maps, which show patterns of movement over a period of hours. These can be used to help concerned organizations understand where people are moving to, in the crucial hours after an incident and thus enable them to direct help better.
The third are Safety maps, which display data from people checking in as safe (yep, the same kind of checking-in that kept showing up on our screens after the Nepal earthquake last year), enabling the authorities to direct help judiciously by telling them where it is not likely to be needed.
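At its core, a location-density map is just check-in coordinates bucketed into grid cells and counted. A toy sketch of that idea in Python, with made-up coordinates – Facebook’s real pipeline is, of course, far more sophisticated and aggregated for privacy:

```python
from collections import Counter

def density_map(points, cell_deg=0.1):
    """Toy version of a location-density map: bucket (lat, lon) points into
    a coarse grid (cells roughly 11 km wide at cell_deg=0.1) and count how
    many check-ins fall in each cell."""
    grid = Counter()
    for lat, lon in points:
        cell = (round(lat / cell_deg), round(lon / cell_deg))
        grid[cell] += 1
    return grid

# Made-up check-in coordinates: three people clustered together, one far away
points = [(12.01, 77.02), (12.02, 77.01), (12.00, 77.03), (13.50, 77.90)]
grid = density_map(points)
print(max(grid.values()))  # 3 -- the densest cell holds three check-ins
```

Comparing such grids before and after an incident is what reveals the movement patterns the relief organizations care about: the densest cells after the event are where people actually are, not where the damage is.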
This kind of information has the potential to change the way relief is provided in the aftermath of an incident.
“We might know where the house is, but we don’t know where the people are. Our first reaction may be to go to where the devastation happened, but maybe most people are 10 miles away, staying with families when they reported they were safe. So the place to go may be where they are,” said Dale Kunce, global lead for Information Communication Technology and Analytics for the American Red Cross.
Facebook also offers Community Help in conjunction with Safety Check, which is a feature that lets people find or offer food, shelter, transportation, and other forms of aid.
Facebook, in their announcement post, showcased how the data will reflect on the maps. They showed what the maps looked like during the March floods in Peru and demonstrated how such information could provide crucial help to authorities during the aftermath of a disaster.
In fact, it is being said that Facebook leveraged its tools to provide assistance during the London tower block fire earlier today; however, it’s too soon for us to be able to corroborate that just yet.
This is not the first time that the social networking platform has been leveraged during disaster management.
In the past, disaster response professionals have relied on Facebook Live and other video tools, in order to gather information to help understand where and how to allocate resources.
It is clear that Facebook has a lot of data on the users that sign up for it. With initiatives of this kind, Facebook is putting that data to constructive use, for social good. Facebook intends to roll out these maps for the use of governmental and other aid organisations soon.
Facebook has also begun deploying another tool, for a less emergent, but no less critical task.
Facebook has begun deploying chatbots that talk about mental health in its Messenger.
It honestly seems like an odd conversation to have with a chatbot, but Woebot explains it in the first few lines as it appears on your screen.
“So here’s how I work, I’m going to ask you about your mood and as I get to know you, I’ll teach you some good stuff.”
Woebot is a chatbot powered by artificial intelligence to improve your mood and even alleviate symptoms of depression.
The chatbot represents the very thin and risky line that Facebook seems to be walking – the line of innovation at the intersection of mental health care and technology. However, Facebook has been very clear: Woebot is not a replacement for therapy, or a therapist. It is a support mechanism and only that, at this point.
Woebot also has its limitations, and it is quick to admit them, recommending that you seek a “higher form of care” when it runs into things it can’t deal with.
Woebot is available at a monthly subscription of USD 39.
Another chatbot on Facebook’s platform is Joe, which tracks your emotions and provides related tips on feeling better. Joe is also AI-powered and available on Messenger, but unlike the more advanced Woebot, Joe is a free service.
There is still a long way to go before mental health and therapy can be meaningfully conducted by robots, but the fact that Facebook is bringing something of the kind to the millions and millions of users it has is definitely a praiseworthy step.
Facebook has also been raising other important issues this month, celebrating Pride Month with the LGBTQ community.
Facebook is doing so with a number of new features, such as the ability to adorn your profile picture with a rainbow flag, or the new rainbow-flag reaction emoji you can select for posts and comments.
You can also choose from several masks and frames by selecting the magic wand tool on Facebook’s camera.
That’s not all. You can also start a fundraiser (a feature that Facebook rolled out recently) to collect money for your favorite LGBTQ cause.
While these new features might not be the most Facebook could have done, the idea of a Pride Month certainly raises the issue on a global platform.
“We are proud to support the LGBTQ community, and while more work still remains, we are eager to be active partners going forward,” Facebook’s Newsroom post said.
Korean Scientists Develop Super Thin OLED Displays Using Graphene Electrodes
The world’s first prototype of a graphene-based transparent OLED screen was demonstrated at the Society for Information Display’s annual symposium and trade show.
An outcome of a partnership between two pioneering institutes, the Electronics and Telecommunications Research Institute (ETRI) and Hanwha Techwin of South Korea, the transparent electrodes for OLED displays are super-thin, flexible and made from graphene (the current darling of the innovation world).
So what’s the big deal about it?
Traditionally, indium tin oxide (ITO) is used for making OLED panels. But ITO is fragile and difficult to work with, as it breaks easily. We explained the material in detail in an earlier article covering an invention by some physicists at Sussex, England. You should read that to learn more about the material and why it’s begging to be replaced.
To continue with the current story: the advent of flexible displays is ruling out the use of indium tin oxide, and thus manufacturers and scientists alike have been on the lookout for an alternative. Some considered materials like plastics, but most arrived at the same conclusion – the biggest potential lies in graphene.
Graphene based technology is a giant leap forward and Korea’s ETRI and Hanwha Techwin are pioneering the technology and techniques needed to harness this dynamic material.
What is so great about Graphene?
Well, it’s an “atomic scale two-dimensional hexagonal lattice” which is as thin as you can get at the moment, highly flexible, stronger than steel (literally) and a great conductor of heat and electricity.
The research for the project began in 2012, supported by the country’s trade ministry. Several other researchers, too, tried their magic along similar lines, but ETRI and Hanwha Techwin were the first to crack the nut.
The team managed to produce the world’s largest graphene-based OLED panel, measuring 370×470 millimetres, with graphene electrodes approximately 5 nanometres thick. Yup, that last one’s right – 5 nanometres.
Excited suddenly? So were we!
The tech has a lot of applications – most obviously in wearables and flexible devices (textiles too!). Just imagine what this new technology could do in conjunction with the minor miracle that scientists at North Carolina State University recently created – touch-sensitive cloth!
I’m sure you can imagine how large the demand for OLED displays is – you can easily estimate it when you read of a company like Apple placing an order of 70 (or was it 100) million units of OLED panels from Samsung just a little while ago!
So there is a huge market for the OLED and it will be interesting to see how this pathbreaking technology will change the game.
Before I end, I’d be doing another brand a great disservice if I didn’t mention this. You just cannot say ‘OLED’ without mentioning LG Display. One of the foremost explorers of display technology in the world, the company is presently engrossed in the development of foldable display panels – an obvious race with Samsung to hit the secret sauce first – and it already wowed the world with its “rollable” OLED panels when it displayed the tech at the Consumer Electronics Show in January.
A spokesman at LG Display commented, “If commercialized soon, the graphene electrode technology would help the industry achieve foldable panels significantly”. So, guess who’s going to be paying ETRI and Hanwha Techwin a visit?
Dear Reader: if there’s one topic you should be reading in the realm of Technology, it should be Graphene – because it’s going to take over your life one of these months. Go ahead – search for Graphene in our Search Bar – we have plenty of stuff to help you learn all about it!
Xiaomi Mi 6 Announced: Snapdragon 835, Dual Cameras And Lots Of Hope
Xiaomi has finally announced its much-awaited flagship for 2017 – the Xiaomi Mi 6. Having skipped the telecom industry’s largest annual event, the Mobile World Congress, this year (where it had actually announced the Mi 5 last year), Xiaomi made 2017’s announcement at an event in Beijing.
The device inherits its 5.15 inch display from the Mi 5, and comes with 6 GB RAM – the most a Xiaomi device has had so far. The generous RAM is coupled with the latest and most potent processor, Qualcomm’s Snapdragon 835, which has so far been seen only in the Samsung Galaxy S8 and Galaxy S8+, and will appear on the Sony Xperia XZ Premium towards the end of this summer.
The phone comes with a 3,350 mAh battery that some may consider a little under par. However, considering that the Mi 6 carries a smaller screen than most other devices and will launch on the most streamlined Android yet – Android 7 Nougat – that much juice should suffice for a day.
Other notable features include dual speakers for stereo audio, improved 2.2 dual Wi-Fi technology, and new screen options that include a new night display which reduces the blue component of light.
What could prove to be a highlight for the phone is the fingerprint sensor, which sits under the glass and could set a trend. Even though Xiaomi brought this to its Mi 5s last September, presently no other brand in the market has an under-glass fingerprint sensor.
A few biggies have patented the related technology, including Apple, so we might see more of it in the market soon.
The most noticeable thing about this smartphone, though, is that it shares plenty of similarities with Apple’s iPhone 7. The most obvious of these is the lack of a headphone jack on the device. Whether Xiaomi is following a market trend here, or simply doing what makes technological sense, is up for debate.
The device also includes a 12 megapixel dual rear camera, quite like the iPhone 7, but that is not all; it has also borrowed a bokeh-style photography option, alongside 10x digital zoom, 2x lossless zoom, and optical image stabilization (OIS). The device has an 8 megapixel front camera which should do well for selfies under most conditions.
Xiaomi does indeed share a lot with Apple on this device, but what it does not share is the price point. The variant featuring 64 GB of storage is priced at USD 360, the 128 GB option at USD 420, and the special Ceramic edition is priced at USD 435.
While all these are certainly far cheaper than their Apple equivalents, it is also noteworthy that the range is more expensive than the usual Xiaomi flagships. Which is where one believes Xiaomi is breaking its mould of “inexpensive” smartphones, and busting a self-imposed glass ceiling of sorts.
The phone goes on sale in China on April 28th, for now, and will come to selected markets soon, as Xiaomi prepares for its global launch.
There is a lot riding on this phone for Xiaomi, given how the company suffered a sales slump last year. It has been trying to recover, treating this as a transitional period after growing too fast. But this could indeed be make or break for it – a device that could put the company back on track, and finally back on the scene.
When it comes to machine-based deliveries, airborne drones get a lion’s share of attention and mindspace – thanks to Amazon’s and Google’s super-publicised attempts to conquer the space first. Unfortunately, equally adept (and in many ways, more immediately doable) land-based experiments have been left out of the limelight.
A San Francisco-based startup called Marble, which has been conducting food deliveries via robot, might be about to rectify that oversight.
Marble’s robots are washing-machine-sized machines that roll about delivering food autonomously. The process is simple – orders are received via an app, a robot then rolls over to the right restaurant, gets loaded up with the food, and then rolls on over to the customer’s location – all in minutes.
When the robot arrives, the recipient punches in a PIN they received upon confirmation of the order, and the robot opens up its loading bay for the food to be picked out.
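The PIN step is essentially a simple gate on the cargo bay. A minimal sketch of how such a check might work, in Python – the class, the 4-digit PIN, and the three-attempt lockout are our assumptions for illustration, not Marble’s published design:

```python
import hmac
import secrets

def create_order_pin() -> str:
    """Issued to the customer when the order is confirmed."""
    return f"{secrets.randbelow(10**4):04d}"

class CargoBay:
    """Minimal model of the robot's locked cargo bay: it opens only for the
    PIN issued with the order, and locks out after a few bad attempts."""
    def __init__(self, pin: str, max_attempts: int = 3):
        self._pin = pin
        self._attempts_left = max_attempts
        self.open = False

    def try_open(self, entered: str) -> bool:
        if self._attempts_left <= 0:
            return False  # locked out; a human operator would take over
        self._attempts_left -= 1
        if hmac.compare_digest(self._pin, entered):
            self.open = True
        return self.open

# Example run: one wrong guess, then the real PIN
pin = create_order_pin()
bay = CargoBay(pin)
wrong = "0000" if pin != "0000" else "9999"
print(bay.try_open(wrong))  # False: wrong PIN, bay stays shut
print(bay.try_open(pin))    # True: bay opens for the recipient
```

Tying the bay to a per-order PIN means a passer-by cannot simply lift the lid and walk off with someone else’s dinner, which is the whole point of the step.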
People would undoubtedly be patting the robot on the head, the way they’d do to a cute kid doing the delivery run!
The technology, of course, is still a work in progress. Marble has already built a fleet that it’s using to run local food deliveries in San Francisco’s Mission District as its pilot program.
And this pilot program, like any other, needs to be monitored very closely. Hence a human minder walks alongside the robot. Even though the robots have been designed to function autonomously, Marble is observing performance closely and fine-tuning the tech and software to hit the road running (so to speak) when the time comes to expand operations.
In addition to the walk-along minder, each robot is also constantly watched by remote personnel, via a video camera. So, if anything goes wrong, the human minders are there to ensure the safety of the robot, as well as others around it.
Even though this approach negates, for now, any potential savings from doing away with human personnel, it is not much different from how autonomous cars in testing have to be accompanied by a driver and an engineer even though they are fully capable of functioning on their own.
At first glance, the robot is a little bulkier than one expects, and it’s not even the most visually appealing tech one has seen. But these are early days, and as the product evolves, it’s going to become much sexier!
When the technology progresses enough that a human minder is not needed to shadow the robot anymore, the robot would then be able to make faster deliveries, and would also be far more cost-efficient.
The question thus is one of ‘when’ and not of ‘if’; robots doing deliveries are definitely coming into our lives, sooner rather than later.
And if you thought Marble was alone in staking out this territory – far from it. There are others working on technology of this kind across different countries.
Most noticeably, there’s Starship Technologies, Marble’s leading competitor, also based out of San Francisco, who have had similar programs running since January 2017.
Looks aside, one would have to be blind not to see that the proposition has tremendous potential – it’s not just food that can be delivered by a platform of this kind; a fully empowered fleet could account for a lot of other ground-based cargo as well: couriers, mail, supplies and perhaps even pets!
Technology of this kind can then be instrumental in revolutionising ground based logistics.
It was Amazon that last did something of the kind, when they brought out their two-day and one-day delivery schemes, something that has become a sought-after standard in e-commerce the world over now. But one must be circumspect; Amazon had to work relentlessly to devise a system to ensure a delivery time that short could actually be maintained.
All these possibilities also feed into the debate between flying robots – a.k.a. drones – and ground-based robots. While flying robots will always have the advantage of speed, they also raise more serious concerns – safety, noise, and environmental impact – than ground-based ones. They might also end up being more expensive to operate, and thus not the best choice at a larger commercial scale.
On the other hand, people might ultimately find it more annoying to share their sidewalks with herds of ground-based robots than to have swarms of drones flying overhead, so you never know!
All said and done, I see most of this autonomy as inevitable, and we as the human race need to acknowledge that that day will soon dawn when we’d be spoilt rotten by it, and have nowhere to go because all we need will magically be arriving at our doorstep or falling gracefully out of the sky! Except, perhaps, Jeremy Clarkson – because hey, Jeremy is anything but graceful!
Google. Hyperlocal. No Surprise.
Google, Hyperlocal, India. Big Surprise!!
Well, Google has finally stepped into the mushrooming hyperlocal services market – in India!
The search giant of the world has quietly launched a new delivery and home services app for India. The app is called Areo and is currently operational in Bengaluru and Mumbai.
The app covers a range of services including food delivery, home maintenance, fitness, electrical repairs, etc. – things an average Indian needs and uses around home. Hence Hyperlocal.
The app is not the service provider though. Like many others before it, Google’s Areo is but an aggregator of the existing services in the market. It works with local on-demand players like UrbanClap, Zimmber, Freshmenu, Box8, Holachef and Faasos, powering the reach of these partner firms by booking appointments, scheduling deliveries, and even enabling payments for them – all online.
“We are constantly experimenting with ways to better serve our users in India”, a spokesperson for Google said in a statement. “In this case, Areo makes everyday chores and ordering food easier by bringing together useful local services like ordering food or hiring a cleaner in one place.”
Google declined to comment on how it got into this market. The starting point, though, can simply be attributed to two things – the growing success of Google Shopping (in international markets), and the aforementioned mushrooming of aggregated lifestyle services.
Another clue: the idea to launch this in India can be traced back to over a year ago. Reportedly, Google approached Zimmber’s CPO, Siddhartha Srivastava, about 9-10 months ago.
Srivastava said that “The search giant was keen on firms that own the delivery model and have a strong technology base.”
“We have invested very heavily in honing our technology integration for this platform over the last 5-6 months, which is a cost for us”, Srivastava expanded. “We expect 10-15% of our revenues to come from Areo. That will be a cheaper marketing strategy for Zimmber. If they market Areo as they have done for other services or products in the past, the revenue growth (for us) could be higher.”
The service has been online and operational as a pilot program (only on Android for the moment) for Google employees for about three months now, with Google actually footing the entire bill for the pilot service for its employees!
One must acknowledge that Google has made quite a smart choice about how to go about the idea. What Google has done is hit a double-volley with the same shot: instead of trying to build its own system for what is certainly quite a dynamic market, it has made use of existing resources and service providers (who are not only hungry for additional revenue, but would also give up their antique Standard Heralds bequeathed to them by their maternal uncles for a chance to be listed as a Google partner).
But that’s not the smarter of the choices I was referring to.
The smarter of the choices made here is that Google has recognized that the market it is entering is not just a dynamic one, but also a very disorganized one. Far too many players already exist in the Indian hyperlocal services market, and there is no convenient way for most users to chart through the mush and identify the best option for their needs.
This problem of unfathomable plenty has caused the market to suffer the only known outcome of confusion – abstinence. Over the last year, many players have shut down operations, while some others have consolidated theirs.
It is the potential to organize the market, and to make money while doing so (something that’s Google’s bread and butter anyway), that Google is banking upon – and hopes to take to the bank (okay, that one just felt nice rolling off the tongue; not every word needs to have a point!).
Areo could potentially be a breath of fresh air for the user, and the suppliers, alike. While this would pose serious competition to players like Zomato, Swiggy and Amazon-backed Housejoy, it would also open up more avenues for the partner firms.
Being available via the search giant’s platform is an immense, almost immeasurable fillip for the partner firms, and the listing is bound to gain traction. The fact that it comes from Google will lend it instant credibility and should also open up a much larger audience.
There’s another implicit ploy that I can sense… Areo would also enable Google to compete with Facebook’s recently launched competing venture, Marketplace.
This is not the first time that India has been a testing space for Google. In the past, Google has tested new products like YouTube Offline, Google Maps Offline and more recently, YouTube Go.
“Increasingly, we realise that we can try things in India — it’s a quick test market — if it works, we can take it outside. Our experience with YouTube Offline worked well in India and we transitioned it to other countries,” Google CEO Sundar Pichai had said some time ago.
Maharashtra Showcases India's First Ideal Digital Village, Proving A Vital Point
The new Government’s push towards achieving a digital India might take some time to produce pan-India results, but that isn’t the case when it comes to its Digital Villages initiative.
Maharashtra’s Harisal village, once known for its poor Human Development Index ratings, is receiving the push it needs from the Government machinery as well as one of the world’s most successful technology companies – Microsoft.
Microsoft adopted this village in a bid to make it India’s first digital village. Not only has it provided the necessary capital, resources and tie-ups for the village, it has supported and partnered so well with the local and state civil servants that the project is poised to receive a national award for public services.
Harisal, in the Amravati district of Maharashtra, has one of the highest rates of malnutrition. It also ranks amongst the lowest villages on the Human Development Index, especially on the parameters of education, employment and income levels. This is precisely why the village was chosen, and the State Government has reiterated time and again that if the project is successful here, then places far ahead of Harisal will take less time to transform.
The civil servants primarily concentrated on the data they received and took action accordingly. They made it clear while addressing the media that increasing access to Fair Price Shops (the usually prescribed “fix”) would not necessarily deal with the issues the villagers were facing, and had certainly not improved the data on malnourishment.
On the contrary, the focus was primarily on providing education, employment and health benefits – the issues that were holding the villagers back in the first place.
Details on what services were provided, and how the public-private partnership enabled a successful project, are given below:
Apart from all these initiatives, the Government is also planning to establish a CDAC in the village to provide mobile units and satellite communication to establish contact with specialist doctors.
Close to 54 more villages around Harisal are poised to benefit from this tie up and adoption.
It will probably be the understatement of 2017 to say that the village has since seen much progress – the jump from people having no phones to villagers now communicating on Skype is the leap in accessibility that best showcases (and markets) the government’s digital village initiative.
More power to such initiatives, we say!
The Swiss Are Using Autonomous Drones To Fly Lab Samples Between Two Hospitals
Like automated homes, drones have had a long hurry-up-and-wait life. Already proven for a variety of activities and purposes, drones have been limited by local regulations. Consequently, despite the technology being fairly reliable and improving with each passing month, drones haven’t yet made their presence felt.
That doesn’t mean that drones are sitting idle either. There have been many, many success stories already. We’re about to tell you about another one now – happening in an unexpected land, and for an unexpectedly critical purpose.
Tapping into the true potential of drone deliveries, Switzerland’s postal service has started to use a drone to fly laboratory samples and other medical payloads between two hospitals. This is currently happening in the Swiss city of Lugano, near the Italian border.
To pull this off, the Swiss national mail service partnered with California-based drone company Matternet, which supplies the drones and the flight system.
The quadcopters that Matternet has been using so far have been small; 31 inches in diameter. Even at that small a size, they can carry a payload of about 4 pounds easily, while traveling at a speed of 22 miles per hour.
For now, the drone flights are remotely monitored and recorded, but there has so far been no need for intervention of any kind; the drone is functioning successfully in its autonomous mode.
You’d be surprised to know that most drones are manufactured with the idea of being autonomous; however, that level of functionality has not really been put to use yet.
The program has been running for two weeks now and has reportedly completed 70 successful flights in that time, which proves that these machines can function autonomously, provided that the software is good enough and weather conditions support the operations.
What is surprising though is that it is the Swiss government that is making this happen for the first time.
We would have expected someone like Amazon to be the first, especially given how relentlessly they have been working on making drone deliveries possible. In their defence though, the tech is not the only thing they have to work out.
The program became functional only after it was approved by the Swiss aviation authority, which has granted permission for the drone delivery service to run in a test phase. Approvals of this kind are precisely what has been hindering drone use in a lot of countries, like the U.K. and the U.S., standing in the way of Amazon and others who might want to run a similar program. The rules in those countries, for now, state that a drone can only be flown within the direct line of sight of a person.
While we understand why rules of the kind exist, especially given the increasing number of personal drones in certain western countries, they clearly stand between drones and their true purpose. The true commercial use of drones is in delivering things when a person is not available to do so. If a person has to follow the drone around, that defeats the entire objective of making it autonomous.
So from the looks of it, Swiss policies might be a step ahead of the others when it comes to accepting tech development and giving it the space to test itself out.
As far as the plans of the Swiss postal service are concerned, they intend to make drones a regular way of ferrying supplies between the two hospitals. We believe it would also be safe to assume that they plan to run more experiments of the kind, enabling them to integrate technology more closely into making human life easier.
Let’s hope this technology soars, as there are plenty of such uses that beg for better and more effective means of transportation.
Alphabet’s Verily Launches Study Watch - A Health-Focused Smartwatch
The market for body mappers and health trackers has been growing steadily – especially in the healthcare and fitness industries around the world.
Well, where there are customers – and customer data – Alphabet (née Google) can’t be far away.
Verily, the life sciences business division of Alphabet (Google’s parent company), has developed a smartwatch that can passively capture health data for medical uses.
According to Verily’s official blog post, the device can track signals related to cardiovascular activity, movement and other medical data points.
The Study Watch measures ECG and electrodermal activity, gathering large volumes of data for analysis that can provide further insight into a person’s health.
The Study Watch, as it is called, uses a two-point ECG – one source is the watch on the wrist while the other source is created when the user touches the metal bezel of the watch with his other hand.
Clearly, this is no run-of-the-mill smartwatch with some basic additional functionalities – it is a medical tool.
As mentioned in Verily’s blog post, the architecture of the Study Watch was designed specifically for high-quality sensor data and seamless signal capture.
The company mentioned that the watch would be used in a Baseline Study – a Verily project that is aimed at establishing what a healthy human looks like, and also be used in the Personalised Parkinson’s Project, a multi-year study to identify patterns in the progression of Parkinson’s disease, giving way to a more personalised treatment.
The watch, unlike its distant cousins in the market, isn’t bulky, and the processor being used can easily manage and encrypt the data generated by the user.
That said, one of the major concerns with all smartwatches has been their battery life. With the Study Watch though, the company promises a week-long battery life for the device and also enough storage for the device to keep weeks’ worth of raw data, eliminating the need for continuous cloud sync.
The watch also has the capability of receiving Over the Air updates, which indicates that the interface might change over time. The only catch is that this state-of-the-art device is not for sale: it will be given out to participants in Verily’s medical studies.
Tech companies are usually not trusted to manage health data, and their efforts at consumer health products or apps have garnered little or no interest at all. That’s primarily because the “health” capabilities of most wearables and trackers have amounted to basic, gimmicky features shoehorned in to justify their existence (and to provide some form of superiority over smartphones).
There are a few commercial predecessors to the Study Watch, the most important being the Fitbit Charge HR. This device is capable of monitoring heart rate, calories burnt, steps taken and much more, all without an uncomfortable chest strap.
The Garmin Vivosmart HR+ does much the same, but is a pricier alternative to the Fitbit.
Apple’s Watch is the only one that has somewhat caught the fancy of the masses. It does things well, including measuring heart rate – in fact, some instances were reported where the watch tipped off users about health emergencies after obtaining unusual readings.
But no company to date has been ready to say that its watch could diagnose diseases, and Alphabet may be a little ahead of the game with the Study Watch.
The Indian Market
India has been a target for international companies at large, as Indian users account for a considerable share of global smart device sales. The Indian community is a growing digital market and could prove to be valuable ground for smart device companies – firstly because of the tech-savvy Indian youth, and secondly due to the government’s “go digital” push.
Will the Study Watch catch on? Well, considering it’s not a retail product, its reach will be limited to the market of serious users. That said, EpiPens, diabetes tests (glucometers), even pregnancy tests and other such home-use medical tools are a multi-billion dollar industry in the U.S. alone! Imagine the potential the Study Watch has, if priced economically and promoted empathetically to India’s 1.3 billion population…
Alphabet’s going to have to play this smart – and knowing them, as well as we do, they will.
Bixby Voice Will Run To Catch Up With Samsung Galaxy S8 Later In The Year
Just a couple of days away from the much-awaited launch of Samsung Galaxy S8 and S8+, the new Samsung flagships, there’s some news that may drive you to grab a coffee and drink it in solitude.
You, like the rest of the world, will have to wait a little longer to talk to Bixby, Samsung’s voice-enabled, Artificial Intelligence-empowered assistant that was going to be one of the highlights of the flagship devices.
While some features of the AI assistant Bixby, such as Vision, Home and Reminders, will be available on the devices starting April 21st, the company recently released a statement informing the world that the AI assistant will be released only later this spring.
“With its intelligent interface and contextual awareness, Bixby will make your phone more helpful by assisting in completing tasks, telling you what you’re looking at, learning your routine and remembering what you need to do. Key features of Bixby, including Vision, Home and Reminder, will be available with the global launch of the Samsung Galaxy S8 on April 21. Bixby Voice will be available in the U.S. on the Galaxy S8 later this spring“, the company said.
They did not give any reasons for why the full roll-out of the AI assistant is being delayed, but word has it that the software was still lacking in its English abilities in the days leading up to the launch.
They had previously mentioned that Bixby might not come with the phones in certain countries, the U.K. being one of them, but now the South Korean megabrand is holding out on releasing the AI enabled assistant into other markets, including the U.S., as well.
The delay has only served to strengthen the speculation that Samsung’s voice recognition system in English is not nearly good enough and substantially lags behind Bixby’s performance in Korean.
In the light of their Note 7 fiasco, it is understandable that Samsung would be circumspect about half-baked products, and would want to put their absolute best foot forward. Customers are willing to love Samsung again, and their love would be immense, but the products have to be just right.
Bixby has been quite a big deal for Samsung, especially given the stiff competition that already exists in this space. Apple has Siri, Google has its own voice-enabled Assistant riding on Android (the baseline OS that Samsung smartphones actually run), Microsoft’s Cortana had its couple of days of glory a little while ago, and there is Amazon’s Alexa, which is making the rounds now.
A lot of development has happened in the virtual assistant field, and there is a lot that Samsung will have to top to actually make its efforts noticed.
To be honest, they only gained enough confidence to enter the AI voice-enabled assistant battle after they acquired Viv Labs last October. Even then, Samsung is still quite behind in the game when it comes to AI voice-enabled assistants.
But if there are two characteristics that Samsung has demonstrated repeatedly, they’d be grit and determination. There’s not a hurdle that Samsung has not overcome in its uncharacteristically tough journey through the forever-effervescent smartphone market. And they’ve learnt the invaluable lesson of “product quality first, revenues later”.
So, while Bixby isn’t saying hello quite yet, we should take heart from the fact that brands that falter on product launches, and then have the gumption to pull them back post-launch (remember the Apple Maps bombshell?), do have the credo and the ability to resurrect and better themselves.
Bixby, we’re here for you. Whenever you’re ready!
Airtel Can Turn Your Basic TV into A Smart TV!
Get ready to be surprised, people!
Your simple, unassuming, old-model TV just got blessed to become a whole lot more. And guess who made that possible? Airtel!
Sunil Taldar, the Director and CEO of Bharti Airtel’s DTH arm, launched a set top box that runs on Android TV and turns your good old dish-based TV into a smart, internet enabled TV.
What does this mean – well, for starters, you won’t need to tire out your eyes binge-watching your favorite series on your laptops anymore. You can just get the new Airtel Internet TV Set Top Box (STB) for INR 4,999 – it even comes with a free one month subscription for all HD and SD channels. More price talk later. First we talk about its features.
This STB supports the Google Play Store, so you can play any game and use any app you want. It’s a hybrid STB, so it can handle both your regular TV channels as well as internet streaming. Netflix, YouTube, Airtel movies, and more are pre-installed. Amazon Prime, Voot, Hotstar etc. may be coming soon too.
Airtel has clearly targeted the needs of the majority of the Indian population by introducing this STB. A lot of us prefer to do all our internet streaming on our laptops or phones rather than buying external streaming devices. For those people, Airtel has integrated all of it into a single STB that fulfils our desire for a bigger screen for Netflix and the like.
Taldar told Hindustan Times – “Airtel’s new set top box can make any dumb TV smart and kills the need for having streaming devices such as Google Chromecast or Apple TV as it comes with its own casting hardware”.
The STB offers an amazing range of connectivity too. You’ll get a Wi-Fi receiver, an HDMI port, Bluetooth, AV port, 8 GB internal memory, 2 GB RAM, and 2 USB ports that can expand storage up to 2 TB (for you to record content and watch it later).
If your internal storage runs out, apps can be installed on the external storage too. You can also connect your phones by attaching Bluetooth dongles. It supports 4K content, and facilitates the recording and rewinding of Live TV channels on external storage devices.
The remote will be a touchscreen unit, along with support for voice commands. So no more browsing channels endlessly to find the one you’re looking for. You can simply tell the remote what you want to see and it will put it on for you.
The gaming experience will be better too, with a separate controller included just for games.
The new STB is aimed not only at Airtel’s own DTH customers, but also at their broadband customers. An extra 10 GB for plans under INR 999 and 25 GB for those above INR 999 is being offered to all their current customers. Though a speed of 2 Mbps will work, Airtel recommends a 4 Mbps connection for the best streaming experience.
If you already use Airtel DTH, you can get it upgraded for a discounted price of INR 3,999. And if you can spare INR 7,999, you can get the new STB with a year-long subscription for all channels.
For a period of 30 days, it is available exclusively on Amazon in top 20 cities only. After that, you can get it at other online stores and Airtel Digital TV stores as well.
Failing to keep up with Jio, but determined to retain their niche in the Indian market, Airtel has finally come up with a first-of-its-kind technology. Offline and online TV worlds can be integrated smoothly with this STB. This is everything we could have asked for!
AirPlay happened on Apple devices, then Chromecast arrived, then Apple launched Continuity and Handoff, which made Microsoft launch Continuum on their recent devices. Evolution clearly needed the next avatar.
Say hello to DeX, which comes as a sidekick to the new Samsung Galaxy S8 and Samsung Galaxy S8+.
If you’re interested in accessing your phone on your desktop, or in using it as a demi-computer, the new Samsung DeX dock enables you to connect your prized new phablet to an external monitor, keyboard and a mouse and use it for a variety of purposes – full-screen entertainment, gaming and even computing.
The DeX dock is supported by a tweaked OS that drives the phablet. While the UI is very basic, it allows the consumer to use a majority of apps in full-screen mode. It has a lock screen, a desktop, and a Chrome OS-style taskbar that displays tabs for open apps. Clearly, this tweaked OS was built specifically for the Galaxy S8 and S8+: a lot of the onboard apps have scalable interfaces that work normally on the phone but grow into full-screen, desktop-like, resolution-optimised layouts for use on larger screens. The tweaks also enable individual application windows to be resized and minimised, much as you do on laptops and desktop computers.
The Samsung DeX comes equipped with an HDMI port, two USB ports, Bluetooth connectivity and even a wired Ethernet jack, which is surprising given that cell phones are innately wireless.
It is also rumored to have a cooling fan, which presumably will keep the phone cool and enable a more efficient, desktop-like experience.
Gratifyingly, the list of apps that work with DeX is not restricted to Samsung’s own apps. Cannily, Samsung has partnered with Microsoft and Adobe to bring Microsoft Office and Adobe mobile apps to the DeX interface. The DeX is also compatible with virtual desktop apps like VMware, Amazon WorkSpaces and Citrix.
Phone functionality will remain untouched with the Galaxy S8/S8+ doing its own work in the background with no intervention from the DeX interface. Hands-free phone calls and text messages can be facilitated through the desktop too.
What is the market for the Samsung DeX?
Targeted specifically towards business-oriented work, the Samsung DeX provides employees and businessmen with a secure access to their digital workspace, coupled with all their requisite business apps and data, right at their disposal.
This device actually addresses a common yet critical problem faced by consumers around the world. Much like WhatsApp Web catered to individuals who wanted to chat informally or formally while accessing their desktop, the Samsung DeX brings us one step closer to importing our favoured communications device into a work-compatible desktop platform.
And, the professional in you would agree – there’s potential in this market, because there is an immense need for such hybrid and integrated solutions.
However, benefits come at a price. The Samsung DeX has to be purchased separately for a pricey USD 150. When we add in the fact that the Samsung Galaxy S8/S8+ is already an expensive buy, the DeX may often be foregone at the altar of budgets.
In the end, it becomes a benefit versus cost decision.
Going by precedent, the Samsung DeX might not end up accounting for a lot of sales this year – others, such as Microsoft with Continuum for Windows Phone and various Linux projects, have already developed similar interfaces that let mobile applications work on a desktop, but the results haven’t been favorable for their offerings, either.
Anticipating that, Samsung clearly devised this product for premium and enterprise customers (who can surely afford the accessory), and given what the DeX can do, it does amount to a good buy if you belong to that group. But in order to sustain good margins and sales, Samsung would have to extend the DeX to budget phones as well, which may also help budget Samsungs pull ahead of the competition in that price range too.
Windows 10 aims to surprise its users with its latest update, announced on April 11.
Also called the Creators Update, it provides users with new experiences that will undoubtedly be appreciated by all, irrespective of their tech-affinity. It’s fairly apparent that Microsoft is working hard to meet the diverse requirements of its very large user base.
Here are the top 10 features that got me all engaged, and might cause a stir in you too:
So, there are lots of goodies and I can’t wait to get my hands on this (officially). However it’ll be a bit too premature to gauge the scale of Microsoft’s success with these changes, at the present moment – let’s wait for the penny to drop, to know if we like the sound it makes.
An age of digital consolidation began dawning upon us a few years ago – one where different facets of our daily lives started being merged into large, multi-faceted platforms capable of supporting our varied needs and functions.
This trend is only going to intensify in the next 4-5 years. It is as if we started with the Big Bang but are now entering the Big Crunch epoch.
Lest I’ve been confusing with my metaphors, let me assure you, I’m not talking about anything related to science fiction, but about Facebook and its subsidiary WhatsApp.
Facebook’s Messenger app has been stretching its muscles and trying very hard to grow into a full-fledged platform. In that journey, Facebook recently added a payments feature (already present in person-to-person chats since 2015) to Group Chats.
It is an extremely interesting and significant move, one that enables people to transfer money within a group, or to chip in on a common purchase or a restaurant bill.
Let me elaborate.
Let’s say a group of friends intend to play football (soccer) over the weekend. The cost of the football comes to USD 7, and since the group comprises 5 members, each must pitch in USD 1.40 to purchase the ball. An organiser (or any participant for that matter) can request every member to pitch in the required amount. As members start pitching in, Messenger will notify the group of who has paid (and how much), and how much has been added to the virtual ‘piggy bank’.
The money can be received and transferred to the bank account of the ‘organiser’ (or any other deputed SPOC) after the goal has been reached.
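The bookkeeping behind such a group pool is straightforward: track a per-member tally against a goal, like the football example above. A minimal sketch of the idea in Python (all names and methods here are hypothetical illustrations, not Facebook's actual API):

```python
# Minimal sketch of a group-payment pool; amounts are in cents to
# avoid floating-point money errors.
class GroupPool:
    def __init__(self, goal_cents, members):
        self.goal_cents = goal_cents          # target amount, in cents
        self.paid = {m: 0 for m in members}   # running tally per member

    def share_cents(self):
        # Equal split, rounded up so the goal is always covered
        n = len(self.paid)
        return -(-self.goal_cents // n)       # ceiling division

    def pitch_in(self, member, cents):
        self.paid[member] += cents

    def total_cents(self):
        return sum(self.paid.values())

    def goal_reached(self):
        return self.total_cents() >= self.goal_cents


pool = GroupPool(700, ["A", "B", "C", "D", "E"])  # USD 7 ball, 5 friends
print(pool.share_cents())                         # 140 -> USD 1.40 each
for member in pool.paid:
    pool.pitch_in(member, pool.share_cents())
print(pool.goal_reached())                        # True
```

Once `goal_reached()` is true, the pooled amount would be paid out to the organiser, as described above.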
Why is this important?
Well, the answer to this question is subjective. If you happen to use a digital wallet frequently, you’ll know there aren’t many wallets that let you pool money for a specific purpose. You could do it manually, but having the messenger itself make it easy for group members to purchase something together is a simple yet effective reason to add the feature. There’s an emotional quotient too – sharing the restaurant bill becomes way cooler as a practice: much easier, less embarrassing and requiring fewer personal follow-ups!
To access this feature, just click on the ‘+’ button on the bottom left side of the Messenger app. Then you need to click on the button with a dollar sign to use the Payments feature.
Facebook continues to maintain that the feature was not added to build a payments business, but to provide consumers with an app that is efficient, useful and satisfies a certain need. That does seem like a legit mission, given that it really does solve a problem.
Perhaps the only issue (which may be only temporary) is that this feature is accessible only in the U.S. for the time being.
Before I end, there’s something that makes things even more interesting – WhatsApp, India’s most used messenger app, which is owned by Facebook, intends to release a Payments feature very soon. Some analysts expect it to arrive within another six months.
If the concept of the payments system is like what Messenger is providing now, it would certainly create a big buzz amongst its daily users. WhatsApp’s voice and video call features have been received very well in the country, and given the government’s recent demonetization move, people are very receptive to applications and systems that facilitate online payments easily and efficiently.
Google's Rolling Out A Fact-Check Feature To Combat Fake News
We’ve been covering the increasing prevalence of Fake News at our site for quite a few months now. We’ve also highlighted what Facebook (the main protagonist of the story so far) has been doing to combat the widespread menace of half-truths.
In an attempt to deal with the growing problem of fake news and misinformation, Google has now decided to roll out a feature to “fact-check” search results and news looked at through its platform.
So, going forward, when you search for something on Google, if the search query returns a result that has been marked as disputed or fake, Google will highlight the matter and display who made the claim, and will also indicate if a third-party organization (identified by Google) has found the item to be true, false or somewhere in between.
In addition to the review of the information, the new feature will also provide users with a link, so that they can provide feedback in case they think something is wrong.
However, the “if” in that statement is quite important, because not all results will appear with a review, for now.
Google first tried this out last year in a limited capacity on their news results, just a few weeks before the U.S. Presidential elections. Now, they are rolling it out completely, in all countries, and languages that Google functions in. It will now encompass two of Google’s biggest and most far-reaching entities – Search and News results.
Google’s move comes right after Facebook put up a “disputed” feature on their platform, to flag news that might not be accurate, or might not be coming from trustworthy sources.
Internet platforms have been receiving a lot of criticism for the spread of fake news, and misinformation, to an extent that Germany is currently in the process of establishing a law to fine social media platforms it deems are contributing to the problem.
Some other countries in Europe are considering a similar approach.
In the aftermath of the completely unexpected election of Trump to the U.S. presidency, online news has become pretty much a topic of distrust around the world.
Not wanting to be painted (or tainted) in the same colour, Google has played its own Ace of Diamonds through this move.
The key here is that Google has not taken on the task of fact-checking the information itself – instead opting to rely on specialist, better-equipped third party organizations like PolitiFact and Snopes to assess the veracity of statements made by public officials and news organizations. There are about 115 of those!
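Under the hood, these third-party verdicts reach Google through schema.org's ClaimReview markup, which fact-checking publishers embed in their pages. A minimal, hypothetical example of such markup (every value below is invented for illustration), built here as a Python dict and serialized to JSON-LD:

```python
import json

# Hypothetical schema.org ClaimReview record; the claim, publisher,
# and rating are all made up for illustration.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "Example claim made by a public official",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,           # position on the scale defined below
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",   # the human-readable verdict
    },
}

# Serialized as JSON-LD, this would sit in a <script> tag on the
# fact-checker's page for search engines to pick up.
print(json.dumps(claim_review, indent=2))
```

The `alternateName` field is what surfaces as the "true / false / somewhere in between" label in search results.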
This is quite interesting because even Facebook, when they rolled out their “disputed” tag, took the responsibility of verification out of their organization and put the onus on third party organizations like PolitiFact and Snopes. Facebook also enabled their own users to report stories that might be misinformation – a departure from what Google is doing – and even though on the face of things that looks helpful, it might leave them with the huge task of sorting through claims from billions of users.
As far as the credibility of Google’s verification is concerned, the company openly says there might be times when different fact-checking organizations disagree, and that it will list the disagreement as well.
“These fact checks are not Google’s and are presented so people can make more informed judgements. Even though differing conclusions may be presented, we think it’s still helpful for people to understand the degree of consensus around a particular claim and have clear information on which sources agree. As we make fact checks more visible in Search results, we believe people will have an easier time reviewing and assessing these fact checks, and making their own informed opinions“, Google said in their announcement blog post.
While we are indeed happy that Google has decided to come on board in the fight against misinformation, we are uncertain how effective this will be, for now. The amount of information that goes onto the internet every day is obviously enormous, and it may take third party sources several days to verify it. And that holds only for information yet to be put on the internet; there does not seem to be a method to effectively verify the information that already exists.
Another “umm” moment: what this fact-checking feature won’t do is improve the search rank of fact-checking sites or bring their information to the top of the page in Google’s “featured snippets” box. What Google should have ensured is that the review snippet displays on top of the search results, regardless of however few searches it shows up in for now.
Regardless, we welcome this first step, because that is better than none!
Google Play Music Subscription Launched In India At ₹89 Per Month
To the joy of many, many Indians, Google launched their streaming music app, Google Play Music, last September.
However, it came with a clunky model – lacking a subscription option, users had to buy the album or individual songs in order to listen to them.
Well, the good news of the month is that Google Play Music has finally launched their subscription model in India. The paid subscription will cost the user INR 89 per month, and will have a host of features to justify that cost.
First and foremost, it will allow users to enjoy unlimited streaming and downloading of music!
Imagine: you could listen to, download and play offline any of the nearly 40 million tracks available on the platform – with just one charge!
Thanks to Google’s mind-bendingly good search capabilities, you can search for music by language or by your favorite Bollywood artists, even music directors. The app will also throw up video options of the songs if available.
“With Google Play Music subscription, Indian subscribers can listen to their favorite music across a variety of languages, including Hindi, English, Tamil and more. This music can be accessed from any device with your Google Account”, Elias Roman, Lead Product Manager, Google Play Music, said.
There’s more – From a personalisation standpoint, the app will provide you with an offline playlist based on what you’ve listened to recently and will allow you to listen to that music offline, even if you haven’t downloaded those songs ahead of time.
This feature also exists on other similar apps in the market, including Saavn. But that shouldn’t be an impediment for Google, as they have a heck of a lot on offer, given their huge repository and immense data mining capabilities.
“To make the experience deeply personalized, we’ve plugged into Google’s understanding of context and machine learning to recommend the right music at the right moment based on each listener’s preference, place, and activity”, Roman added.
The company is offering a 14-day free trial, along with discounted rates for those who sign up within 45 days of the rollout. The discounted rate of INR 89 might go up to INR 99 for those who sign up later. The service will be available on Android, iOS and the web.
Even though the move is a good one, Google might be a little too late to the Indian music streaming and downloads game.
Local players like Saavn, Gaana and Wynk have already cornered major chunks of the market, with Wynk having recently reported 50 million users (as have the others).
The only other international player in the market, after Rdio’s shutdown, is Apple Music, which has a customer base that functions on loyalty to Apple, and not so much on what’s popular in the market.
It is, however, noteworthy that Google’s subscription price, at INR 89, is lower than anyone else’s in the market; Apple Music is at INR 120, and the local players are all in the range of INR 99.
Thus, in order to find a place for itself in this dynamic market and carve out a niche, Google Play Music is taking a few different approaches.
For starters, it has recently added a new ad-supported tier to its services, which allows users to upload up to 50,000 of their own songs to Play Music without having to spend a single rupee. This would enable Google to build a YouTube-like library of audio data.
The second interesting thing that Google is doing is offering radio services to its free users, which includes playlists depending on mood, activities, and situations. This works on a location-based algorithm, where it can assess your location and figure out whether you’re at work, home, or traveling, and then plays music for you accordingly. A lot of the other services offer radio services too, but they may prove to not be as effective without the location/time-based customisation of the music they serve up.
It is also interesting to see Google taking steps deeper into a market like India, where piracy is rampant. Music is illegally downloaded off the internet all the time, and most smartphone users play at least some of it, if not all.
A business model like Google’s has proven to work in markets where piracy is a smaller factor, but in a country like India, it will be interesting to see how it fares.
Disclaimer: This article is tech-intense, and there’s no real way to ease the jargon. So if you struggle through the article, don’t blame yourself. I had to read the material multiple times myself.
If you’re fully awake, and bright eyed and bushy-tailed, then get ready to wade into the land of techalese!
JEDEC, the organisation responsible for innovating and upgrading the standards for computing memory, intends to demo a new version of RAM called “DDR5” by June this year.
JEDEC maintains that the standard will be finalised by 2018, after rigorous testing and certification of conformity with norms.
The features expected to see significant upgrades are memory bandwidth and density, both potentially double what DDR4 offers. Power efficiency is also set to improve, though the organisation has not yet released figures.
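The bandwidth half of that claim is easy to sanity-check with back-of-the-envelope arithmetic, assuming a standard 64-bit (8-byte) memory channel. The transfer rates below are illustrative, not final DDR5 figures:

```python
# Peak bandwidth of a memory module = transfer rate x bytes per transfer.
# 3200 MT/s is a common DDR4 speed grade; 6400 MT/s stands in for a
# doubled, DDR5-class rate (an assumption, since the spec isn't final).

def peak_bandwidth_gbps(mega_transfers_per_sec: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s over a 64-bit (8-byte) channel."""
    return mega_transfers_per_sec * bus_bytes / 1000

ddr4 = peak_bandwidth_gbps(3200)  # DDR4-3200
ddr5 = peak_bandwidth_gbps(6400)  # a doubled, DDR5-class rate
print(ddr4, ddr5)  # 25.6 51.2
```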
As mentioned before, DDR5 RAM will take at least a few years to reach the market, much as DDR4 did. Such roll-outs tend to be long-tailed events, as chipset compatibility and installation into computers usually only happen around two years after the finalisation and public release of the standard.
This, in turn, is because memory controllers and SoCs require upgrades to support the new standard, and those chipset changes take around two years to complete from start to finish.
The good thing, though, is that most computers (and now smart devices) are still quite well served by current DDR4 equipment, and are nowhere near begging for a change.
In fact, you may be surprised to know that every device the marquee manufacturer, Apple, has shipped to this day runs on the previous-generation LPDDR3! Apple hasn’t upgraded its products to DDR4 yet, citing “battery life considerations”.
Even though “LPDDR4” – a low-powered variant of DDR4 – exists, it is not supported by the Intel Skylake processors currently found in the latest MacBook Pro models.
Coming back to the DDR5 specification, it has nothing to do with the decade old memory standard GDDR5 (in case that was bouncing around in your head), which was dedicated for graphics cards and gaming consoles.
The DDR5 will improve power management in various applications and drive higher performance through the entire power band. It will also enable a more user-friendly interface for server and client platforms.
Yet, despite the innovation and formulation of new RAM standards, the future points towards RAM-free computer operation. Intel’s Optane drive is one example of where the industry is headed. These drives combine the density, capacity and non-volatility of the SSD with speed that is slowly approaching that of RAM. The latency of Optane drives is still around 10 times that of RAM, but future innovation and technological improvement may close that gap – eradicating the need for separate pools of storage and RAM.
The day that happens successfully, it will not only mark a fundamental shift in the way computing devices work, but it will almost surely ring the death knell of RAM as we know it today.
JEDEC plans to disclose more information about the DDR5 specification at its Server Forum event in Santa Clara on June 19, 2017, and then publish the specs in 2018.
For now, this enhancement is going to be the benchmark of processing power (chipsets aside), till its nemesis emerges from the labs of some brilliant inventor.
Google Brokers A Consortium Amongst Top Android Partners To Increase Mutual Benefit
If you are a mobile manufacturing enterprise that produces Android-backed smartphones or tablets, then things just got sweeter for you!
Google and several top-drawer Android device manufacturing companies have agreed to a truce that will bring more openness into the Android applications and software market.
The agreement, named the “Android Networked Cross-License Agreement”, has been forged between a group comprising Android giants Google, HTC, LG, Samsung, HMD, Foxconn and a variety of other companies, who pledge to share patents with each other royalty-free.
Licenses are going to be granted royalty-free to any company that manufactures devices with pre-installed Android applications which meet Android’s compatibility norms, with the condition that they join the group and adhere to the agreement.
The agreement has also been dubbed PAX by executives at Google – ‘peace’ in Latin.
Jamie Rosenberg, Google’s Vice President of Business and Operations of the Android and Google Play wing said in an editorial, “It is with a hope for such benefits that we are announcing our newest patent licensing initiative focusing on patent peace, which we call PAX”.
The PAX website states that members shall not interfere with one another: each will respect the others’ autonomy in their own affairs, and long-term freedom of action with respect to Google and Android shall be accorded to everyone concerned.
What are the obvious benefits for signatories?
The website also notes that the current member companies hold a combined inventory of more than 230,000 patents. Google is, understandably, keen to welcome other companies, large or small, to become signatories and reap the benefits of a sustainable, peaceful and friendly Android ecosystem.
Commercially, the agreement indirectly gives member companies the muscle to fight patent lawsuits collectively – the group could, if the need arises, defend itself (or sue) together. The direct benefit is simpler: members no longer need to pay royalties to a ‘partner’ company.
Google, Samsung and HTC stand to benefit most from PAX. The Android ecosystem, owned by Google, gets a wider spectrum of companies of varying sizes into its family, multiplying the acceptability of Android.
As a competitor to iOS, Google would really benefit. Smaller companies that feared litigation would be shielded from it. Similarly, Samsung and HTC, which sell huge numbers of devices integrated with Android and Google applications, look to benefit the most. Given the nature of the agreement and the willingness to fight lawsuits collectively, there would be hardly any risk from patent trolls.
However, it is not yet known what kind of patents will be shared or what threats these companies wish to defend against. That is the kind of details we would have to keep an eye out for.
Good or bad, we all have some opinion on the recent demonetization fracas. Whether it really helped the country, as the government claims, is still a question. But it did have one benefit – it propelled Indians to whole new paradigm – that of digital payments.
Paytm had been on the scene long before demonetisation, and it had been somewhat popular (primarily as a competitor to credit cards in the country). However, the lacklustre implementation of Modi’s new-money policy funnelled a whole new generation of customers to Paytm. Consequently, Paytm became the dominant digital-money player in the country – overnight.
It wasn’t long before other companies such as Truecaller, PayUmoney, Airtel Payments Bank and Samsung Pay joined the race – each relying primarily on their captive customer base.
From where I stand, I don’t really see any of them having made a mark in the marketplace, let alone denting Paytm’s dominance.
WhatsApp is all set to join the fray – which isn’t all that unexpected, to be honest.
It’s no secret that Facebook wants to be everywhere in your life – at parties, at airports, in bed – so why not in your wallet?
In fact, the Facebook-owned messaging app is already being used for a lot of business. Almost anything is sold via WhatsApp – from clothes, (mostly artificial) jewellery and watches, all the way down to basic necessities.
Thanks to its ubiquitous reach, WhatsApp makes complete sense as a person-to-person payments platform. There are only two things missing – a proper business app/platform for merchants, and a simplified payment service.
What we’re building up to, is that the latter is coming soon!
Much like Paytm’s simple-ish wallet, the obvious highlight of WhatsApp’s payments platform is expected to be its usability. The mainstay of usage, though, will be spontaneous individual payments (not scheduled payments or monthly EMIs).
There’s something even better hiding in the shadows – WhatsApp’s taken a different approach. There’s going to be no need to take out your credit/debit cards and go through the hassle of entering all the required information every time before you can make a payment. You would fill all of this information only once, when you first use the feature. That’s because unlike the approach used by most digital wallets, WhatsApp will be using the Unified Payments Interface (UPI).
In case you’re asking yourself what exactly UPI is, let me help.
Simply put, UPI merges 21 prominent banks (that have participated in the program) into one single platform. You can use this single platform instead of relying on individual banks’ apps or websites (in case you’re banking with multiple banks, or even have multiple accounts with the same bank).
So when you use UPI through WhatsApp, you can make immediate money transfers at any time from the bank of your choice – easily, conveniently, and from one single interface!
But my personal favourite is that you can split food and other bills with your friends – in real time! When my friends and I go out, one of us (the nicer sort) often ends up paying more than the others due to the indivisible quotients of our bill. Most of the time I feel the universe connives to do that to me on purpose. Other times, friends kind of “forget” to pay me back.
With the advent of digital payments, I hope not to have that issue.
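The bill-splitting idea boils down to integer arithmetic. Here’s a small sketch in Python – an assumed illustration, not WhatsApp’s actual API – that divides a bill in paise so the indivisible remainder is shared out fairly:

```python
# Split an amount (in paise) among friends so no one silently overpays:
# the leftover paise go to the first few payers, one paisa each.
# Purely illustrative; WhatsApp's real payment flow is not public yet.

def split_bill(total_paise: int, n_friends: int) -> list[int]:
    base, remainder = divmod(total_paise, n_friends)
    # The first `remainder` friends chip in one extra paisa each.
    return [base + 1 if i < remainder else base for i in range(n_friends)]

shares = split_bill(100000, 3)   # INR 1000.00 among three friends
print(shares)                    # [33334, 33333, 33333]
print(sum(shares) == 100000)     # True
```

In a real UPI-based flow, each share would then become a collect request sent to the respective friend’s bank account.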
Facebook’s idea of using the UPI as the foundation of WhatsApp payments in India could also be an attempt to get back on favourable terms with the government, after the controversy surrounding Facebook’s Free Basics in 2016.
But the real question is whether WhatsApp will be able to overtake Paytm. The feature does sound appealing in a lot of ways, and the simplicity and convenience it promises are easy to imagine.
There are about six months until we get to use this feature, so for now we’re going to have to wait.
I, meanwhile, am going to carry a calculator and lots of pocket change with me for our next Friday Night Dinner…
Saturation leads to stagnation, but sometimes it can also be an incentive to find new ways to progress.
Take the example of chips – the silicon heart of every electrical device you will ever use – they are among the first bastions of Moore’s Law.
Continuous progress in technology and materials has made chipsets (and consequently, computers) ever smaller, charting a trajectory from the room-sized ENIAC to a diary-sized MacBook. Yet, as jubilant as today’s inventors may be, they continue to face an immutable concern – “What next?” Or rather, “How next?”
Their conundrum relates to Moore’s Law – Gordon Moore’s estimate of the rate of progress in the number of transistors that can be shoehorned into a given area of silicon, given the technology prevalent at the time. Per Moore’s Law, the number of components on a circuit can, at best, double roughly every two years. And that is not a restriction inventors can wish away.
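In code, the law is just repeated doubling – a quick illustration (the starting count is arbitrary):

```python
# Moore's Law as arithmetic: transistor counts double roughly every
# two years. The starting figure below is notional, for illustration.

def transistors_after(start: int, years: int, doubling_period_years: int = 2) -> int:
    """Transistor count after `years` of doubling every `doubling_period_years`."""
    return start * 2 ** (years // doubling_period_years)

# From a notional 1 billion transistors, a decade of doubling:
print(transistors_after(1_000_000_000, 10))  # 32000000000
```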
Moore’s Law has governed the chip industry for decades, setting the precedent for further development in the way chips are made.
But packing more computing power into ever-smaller chips means technological progress will come to a halt unless new ways of increasing density are found. And there’s a limit to that too – an increase in wiring density also increases the heat the chips emit.
Now, researchers from the Massachusetts Institute of Technology and the University of Chicago have come up with a novel solution that can extend the possible shrinking of chips, and provide a shot in the arm to circuit makers.
I suggest you grab your cup of coffee, or latte, or a tall glass of fresh lime before I proceed. It’s going to be an interesting read for a hungry mind. I’ll wait.
Okay, here we go now.
The answer revolves around the principle of self-assembly of the wiring on chips.
Existing technologies use an electron beam to etch patterns on the chip. This process is quite time-consuming and mechanical. The small transistors are forged using a small-wavelength electron beam. The wavelength can be reduced by using Extreme Ultra-Violet (EUV) lithography, but that process is quite expensive and challenging.
With the new process, a mix of two polymers is laid down on the chip, where they form patterns on their own. The electron-etching part is the same as before. But there’s a twist.
These polymers are made up of chain-like molecules joined end-to-end from two different polymers. The first polymer is heated till it vaporises, then allowed to condense on a cooler surface. A coating of protective polymer is then added over those two layers, which allows them to form a dense, vertically oriented pattern.
Unlike existing chip patterns, this one is significantly denser, putting four wires into the space of one! The effect is similar to the way 3-D transistors are stacked. The technique can also be used with a 7-nm electron-beam setup, instead of the 10-nm setup currently used.
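The density gain is simple to quantify: if each lithographically drawn line is subdivided into four self-assembled wires, the effective pitch shrinks to a quarter. The numbers here are illustrative, not taken from the paper:

```python
# Rough arithmetic for the "four wires in the space of one" claim:
# self-assembly subdivides each drawn line, shrinking the effective
# wire pitch proportionally. Pitch values below are illustrative.

def effective_pitch_nm(drawn_pitch_nm: float, subdivisions: int = 4) -> float:
    """Pitch between wires after self-assembly subdivides each drawn line."""
    return drawn_pitch_nm / subdivisions

# e.g. a 40 nm lithographic pitch becomes a 10 nm effective wire pitch:
print(effective_pitch_nm(40))  # 10.0
```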
The solution was published in the journal Nature Nanotechnology, in a paper by postdoctoral researcher, Do Han Kim, graduate student Priya Moni and Professor Karen Gleason – all at MIT, and by postdoctoral researcher, Hyo Seon Suh, Professor Paul Nealey, and three others at the University of Chicago and Argonne National Laboratory.
The paper claimed that the new process would be cost effective too, for the materials that are added are already used in the chip industry and the process is nearly identical to the existing one.
That said, it will still be a long time before the process is proven and adopted substantially by the industry. Change does take time, especially in something as widespread and exacting as chip-making.
The paper promises chip speeds that are unheard of at the current time. The best part? Even if these solutions are incorporated, Moore’s law would still stand!
Huawei has been making a rock-solid space for itself in the alt-West markets over the last few years. With its latest P10 duo, Huawei is clearly looking to cement itself as one of the preeminent smartphone manufacturers in the world.
Truth be told, it’s done a lot to justify its seat at the table too.
The premier devices were announced at the Mobile World Congress not too long ago.
Even though the Mate series is Huawei’s titular flagship line, the more diminutive ‘P series’ has been steadily closing the gap for a while now.
Outstanding camera capabilities and superior battery life are the particularly impressive features of the devices. And we can’t forget the design side of things too.
Given that the P10 phones are, in terms of external design, quite apparently close cousins of the iPhone, they tend to be quite beautiful indeed.
Is there anything that tells the P10 duo apart?
When it comes to the design nuances, both the phones are quite similar. The main difference between the two is the display size, with the P10 pulling in at 5.2 inches and the P10 Plus staying at the standard-phablet size of 5.5 inches.
Given the difference in screen sizes, there’s a difference in display resolution too (with the larger P10 Plus carrying 540 pixels per inch, against “only” 432 ppi on the smaller P10). The battery on the Plus version is also, understandably, bigger than that of the regular phone.
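For the curious, pixels-per-inch figures like those are derived from a panel’s resolution and diagonal size – the diagonal pixel count divided by the diagonal in inches. A quick check with generic panel numbers (not necessarily Huawei’s exact panels):

```python
import math

# How ppi figures are computed: diagonal resolution over diagonal size.
# The example panel below is generic Quad HD, used only for illustration.

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density = diagonal pixel count / diagonal length in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# e.g. a Quad HD (2560x1440) panel at 5.5 inches:
print(round(ppi(2560, 1440, 5.5)))  # 534
```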
The Huawei P10 is powered by octa-core HiSilicon Kirin 960 processor and it comes with 4 GB of RAM, and 64 GB of internal storage that can be expanded up to 256 GB via a microSD card. As for the eyes of the phone, the Huawei P10 packs a 20 megapixel primary camera on the rear and a 12 megapixel front shooter for selfies. It runs on Android 7.0 and is powered by a 3,100 mAh non-removable battery.
The P10 Plus is powered by the same octa-core HiSilicon Kirin 960 processor as its smaller sibling. The similarities extend to the RAM, internal storage, cameras and OS too. Happily, a bigger 3,750 mAh non-removable battery powers the bigger variant.
What can be considered a highlight of both the phones is their dual camera setup that was co-developed by German optics company Leica. The partnership with the German company did much good to the P9 range and it’s fair to expect that they’ve achieved quite a bit with the new duo as well.
Overall, what we’re hearing is that the ‘good’ side of the balance sheet for these phones includes their attractive design and solid build, with the colour options a definite add-on. The display is quite good too – rich and vibrant most of the time. The EMUI 5.1 user interface atop Android is much better than previous iterations. The camera performance is excellent, and the sizes of the devices prove quite manageable.
The battery, which should likely last you the entire day, is definitely a plus.
On the other side though, the speakers on the device could have been much better. The camera app might get a little too complicated for some. And the device is not water resistant.
None of these, however, are deal breakers as such.
So all in all the devices should be more than good to compete in the market place.
But most notably, as I’ve said in the past for many, many devices – the Huawei P10 and P10 Plus are in no way revolutionary. They prove, however, that incremental improvements eventually stack up into a more meaningful package as a sum of parts.
Curious About The Samsung Galaxy S8 And S8+? Get Acquainted.
As the elder Wayne asked the not-yet-a-superhero Bruce, “Why do we fall?” – remember what he said next?
“So that we learn to pick ourselves up.”
Well, someone has picked themselves again, kicking and thrashing their way to progress.
Just like Wayne Manor, Samsung was gutted by fire after the Galaxy Note7’s volatile exit from store shelves across the globe. Then, with charges of nefarious involvement in South Korea, Samsung’s foundations were almost uprooted.
But now, with the release of its newest flagships, the Galaxy S8 and its bigger counterpart the Galaxy S8+, Samsung seems to have reinvented itself. While it already blends technology and polish seamlessly, it’s added a new element to its wares – pragmatism.
The phones (they’re actually both phablets) don’t feel like run-of-the-mill phones. Both of them fill your senses like that supercar you dream about. Never before in the field of smartphones has a phone been so beautiful.
On the “smaller” Galaxy S8 (we can’t really believe we’re using that adjective for a 5.8 inch screened device, either), a large Super AMOLED Quad HD+ display curves around the edges like a beautiful windshield meeting the metal sides, and then curving again, as the rear glass completes the device. It’s the same with the Galaxy S8+, the bigger brother of the two, with 6.2 inches of glass.
So proud is Samsung of these devices, and so sure of their identifiability that they’ve even foregone putting the brand name on the front of the device.
Interestingly, Samsung’s biggest achievement is the magic they’ve cast on the screens – a new aspect ratio (18.5:9) allowed Samsung to shoehorn big screens into bodies the size of the Galaxy S7’s! The S8 is narrower than the S7, despite having a bigger screen. The Galaxy S8+ is a complete shocker – it’s just about as wide as the Galaxy S7, but has a much bigger screen. Both devices thus tend to be taller than the outgoing flagships.
As is usual with Samsung’s flagships now, they’ve decided to go with different processors in different regions. In Asian markets, Samsung is using its own Exynos 8895 chipset, whereas in the models headed to the U.S., they’re going with Qualcomm’s latest, the Snapdragon 835.
That said, the names don’t really matter – Samsung’s packed a tiger in the tank. The RAM is the same as on the Galaxy S7 – 4 GB; and both new phablets come with 64 GB of internal storage that can be reinforced with up to 256 GB of external memory.
The processors are going to be more efficient and kinder towards the battery as compared to other chips (since they’re made on the newer manufacturing process for chipsets).
The surplus battery can also power a desktop experience called DeX – basically a dock that lets the S8 duo drive a monitor for a full-screen experience, much like Microsoft’s Continuum. The intent is for this separately-sold dock to convert the phone into a mini PC. Equipped with two USB-A ports and an Ethernet connector, the experience is interesting.
Despite this piggy-backing by the screen, external keyboard and the rest, the processors allow no lag. Samsung’s apps, too, resize as you move between the phone and the monitor. If you’re feeling adventurous, you can even stream your actual Windows desktop! I won’t say this can replace your desktop, but it’s the best middle ground there is.
As for hard external details, the visuals are eye-catching – the edges of the screen are almost invisible thanks to the borderless curved display. The fingerprint scanner is curiously placed next to the camera lens, from where you can comfortably smudge your camera again and again. So it might be tricky; but the facial and iris recognition are fast enough (more on that in our detailed reviews of the devices).
The phones come with IP68 dust and water resistance (up to 30 minutes under water) – so you can be a little cavalier with the handling, but don’t go deep!
I’m not going to cover every bit of the hardware and software mix that the Galaxy S8 and Galaxy S8 Plus carry – I’ll leave that to our detailed write-ups. I only want to tell you about the things that have the world worked up into a frenzy at this time.
For all the details it shared, the launch event left out two things – prices and availability dates.
Estimates are already rolling in. In India, the phones are expected to command a cool INR 50,000 for the S8, and upwards of INR 60,000 for the bigger S8+.
The release date is supposed to be April 21st in the U.S., but India might have to wait a little bit.
We also don’t know whether we Indians will get the Gear VR headset with Samsung’s new wireless controller and an Oculus game pack for free, as U.S. preorders are receiving, but I’m pretty sure Samsung will have something fair for us, the world’s second-largest smartphone market, too.
This is Samsung’s vehicle for redemption – and no one is taking it lightly, least of all Samsung.
All good things are made up of small bits that add up to a wholesome package, and an operating system is no exception. With the release of Apple’s iOS 10.3 on March 27, one notices the value of small cog wheels, that don’t seem much in seclusion but make our life a wee bit easier.
One of the biggest changes in the iOS scenery is that Apple has moved to a new file system. Prior to iOS 10.3, the de facto file system on iOS was the venerable HFS+. Now, with the latest release, Apple has adopted its own Apple File System (APFS).
Advantages? Yes, there are many. And there are a disadvantage or two as well.
The move from HFS+ to APFS provides better optimisation for NAND flash and SSD storage, more accurate timestamping, and support for stronger encryption.
Come to think of it, HFS+, with its 30 years of accumulated legacy, was a historical artefact waiting to be replaced.
Well, change comes with its baggage too. Once you update your phone to iOS 10.3, the entire file system changes. And you can’t go back to yesteryear-status.
Let us understand the gravity of this move – as most of you would know, data is stored as packets, and those packets leave an imprint. The imprint has its own pattern, which helps in retrieving data from memory, and even in super-important data recovery, should the memory or device malfunction. Moving to iOS 10.3 changes the file system – the data packaging and patterns of the old system are erased, and the new one is overlaid instead.
So there can be no recovery. The lack of recovery options means that you should back up your data prior to updating the OS. You can do that through iCloud, or manually via iTunes on your PC/Mac (we prefer the iTunes route – it’s faster, and you can copy the backup onto an external drive for safekeeping, so it doesn’t get overwritten by subsequent backups).
Well, here’s a list of what else is new with iOS 10.3:
Find My AirPods:
Only for those who own the precious nifty new AirPods, this feature enables a tracker – well, sort of – for you to find their location.
Fact is, the AirPods themselves cannot connect to GPS or any network (Apple either forgot about this need, or ran out of space in the diminutive earphones to shoehorn in the requisite hardware).
So this new Find My AirPods option (which sits within the Find My iPhone app) allows you to locate and find AirPods lost within your vicinity (say in your couch or left behind in last night’s jeans). If you’re within Bluetooth range of the AirPods, it’ll play an audible beep through the AirPods and even guide you to their reclusive hiding place via an on-screen map.
If they’re out of Bluetooth range (in your car parked in the basement, or at last night’s pub), the app digs into its internal logs and points out, via a map, where it last logged the AirPods’ location. Best of luck, though – these buggers tend to turn reclusive often – hence my longstanding advice: keep them in their case whenever you aren’t using them!
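Under the hood, “how far away” guesses like this typically come from Bluetooth signal strength. Here’s a sketch using the standard log-distance path-loss model – the calibration constants are assumptions for illustration, not Apple’s actual values:

```python
# Estimating distance to a Bluetooth accessory from RSSI using the
# log-distance path-loss model. The 1-metre reference RSSI and the
# path-loss exponent are assumed calibration values, not Apple's.

def estimate_distance_m(rssi_dbm: float,
                        rssi_at_1m: float = -60.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Distance grows exponentially as signal falls below the 1 m reference."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exponent))

print(round(estimate_distance_m(-60), 1))  # 1.0  (at the reference strength)
print(round(estimate_distance_m(-80), 1))  # 10.0 (20 dB weaker => 10x farther)
```

In practice, apps smooth many noisy RSSI readings before trusting an estimate like this.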
App Transition Animations:
What is a good change? One, which you don’t even notice!
Apple has made some tiny adjustments to the flourishes and animations that iOS plays during app transitions. Now, when an app opens and shuts, the edges of the animation are chiselled to a soft, rounded look. But most importantly, the animations have been shortened and made faster, which makes ziplining between apps feel like a breeze!
Revised Apple ID Profile:
This setting profile is made like a Master Log of a ship – with all the information about your Apple ID (and iCloud) included in one place.
So your contact profile, security settings, payment method, iCloud usage details, App Store settings and even the Family Sharing settings are now on a single page. It also keeps a note of every device currently signed in using your Apple ID.
With 10.3, Apple has fixed some gaps in the Safari browser, as well as some ‘backdoors’ that had the potential of being exploited by hackers.
There are quite a few other things implemented in iOS 10.3, but we aren’t going through those piece by piece here. We’ll probably write a more comprehensive article at a later date.
As we emphasised twice earlier, these changes are small and they probably will go unnoticed by most users. That said, it’s a good idea to update your phone – for battery improvement and faster interactions due to leaner code and bug fixes, if nothing else!
Facial Recognition Will Help Doctors Detect Rare Genetic Disease
While facial recognition is still catching on, and has so far been used mainly to address security concerns, a few scientists at the National Human Genome Research Institute have had the brainwave to use this nascent technology to diagnose a rare genetic condition called DiGeorge Syndrome.
The DiGeorge Syndrome is quite hard to diagnose in the first place, as it is caused by the deletion of a tiny segment in Chromosome 22. While microscopic in nature, this deletion leads to a number of medical complications and cognitive conditions. The disease can be associated with multiple defects throughout the body, including cleft palate, heart defects and learning problems, which make it extremely difficult to identify and diagnose.
It also comes with a characteristic facial appearance, but that, too, is difficult to identify because it varies across ethnicities.
“Human malformation syndromes appear different in different parts of the world. Even experienced clinicians have difficulty diagnosing genetic syndromes in non-European populations“, explains NHGRI Medical Geneticist Paul Kruszka.
This is precisely where new technology of this kind comes into play.
The NHGRI team started by studying 101 photographs of people with the rare genetic disorder from different ethnicities from Africa, Asia, and Latin America. Over the course of this study, they were able to develop facial recognition tech which was tweaked and improved several times over – till it was able to correctly identify the syndrome 96% of the time.
A number that high is quite commendable, given that many pathological tests give results that are reliable only in that ballpark, or less.
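For context, a headline figure like “96% correct” is usually just correct calls over total cases. The counts below are invented for illustration; the study’s raw confusion numbers aren’t given here:

```python
# How a diagnostic classifier's accuracy is typically computed:
# (true positives + true negatives) / all cases evaluated.
# The counts in the example are hypothetical, for illustration only.

def accuracy(true_pos: int, true_neg: int, false_pos: int, false_neg: int) -> float:
    correct = true_pos + true_neg
    total = correct + false_pos + false_neg
    return correct / total

# e.g. 97 affected cases flagged and 95 controls cleared, out of 200 photos:
print(accuracy(97, 95, 5, 3))  # 0.96
```

For a medical screen, sensitivity (how few affected cases are missed) usually matters even more than raw accuracy.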
The digital facial analysis technology was developed by Marius George Linguraru of the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National Health System in Washington, DC. The team believes that tech of the kind can also be used to detect other disorders such as Down’s Syndrome, which also gives innumerable sufferers an extremely hard time.
The tech, however, is still under deep cover, being tested rigorously before it can be considered a viable tool. With time and more effort, the expectation is that the team will be able to develop it to a point where it can actually help healthcare providers around the world.
A future where a doctor can diagnose such a rare disorder simply by running facial recognition on a photograph is indeed a future to look forward to – it may see and identify ailments that would otherwise be overlooked in daily life. More than anything else, it will save many from delayed identification, and alleviate a painful experience to the extent possible.
Wi-Fi Calling, as a technology, is quite a muddled concept. It’s not new, but exists only vaguely in people’s heads. Some dismiss it as nothing new, because it sounds a lot like calling people while on Wi-Fi. It’s not!
A lot of people confuse it with the voice and video calling facilities that apps like Skype, Google Hangouts, Facebook Messenger and WhatsApp already provide. Wrong!
Let us help you understand what it really is, right here, right now.
First up, Wi-Fi Calling is different from Skype or other apps in the sense that while it does utilise a wireless network instead of using the carrier’s telecom network to make calls, it does so using your regular GSM/CDMA mobile number.
Second, you can call people on their phone numbers instead of their Skype ID etc. So, it integrates into your regular calling workflow beautifully – you may not even have to alter the process of calling someone in any way.
Third, Wi-Fi Calling is a hybrid product that melds together the Wi-Fi and GSM/CDMA networks, so that calls initiated on one technology can shift to the other automatically, should you lose connectivity on the originating technology. We’ll cover this a bit later in the article.
Fourth, Wi-Fi calling needs to be supported by your Telco (you’ll see why a little later in this article). Most flagship phones from major brands have Wi-Fi calling services baked in, out of the box.
Fifth, you may use your regular phone dialer for making the calls – no need to fire up a third-party app (like Skype or WhatsApp) to make the call!
What’s the benefit, you ask?
Well, calls are free, by and large – though there may be some riders from your savvy telco’s side. Verizon in the U.S., for example, makes all Wi-Fi calls to U.S. numbers free, even while travelling internationally, but not calls to other countries.
Telecom companies these days are embracing Wi-Fi calling themselves, perhaps because they want to scale up their network coverage and provide a better user experience to their customers.
If you’re in a country where Wi-Fi Calling has been enabled by your telecom carrier, you can set Wi-Fi Calling as the default mode of placing a call – so you’ll save on telecom bills, as well as ensure that you don’t drop calls if the phone loses signal.
We’ve all been in situations where we were in desperate need of a carrier network to make a call but couldn’t. Life made easy, right?
Also, again since the service is baked into the phone unlike third party apps, you don’t need to update your contact list on a third party app (like you have to with Viber or Skype where you actually need your friend’s Skype ID).
The best part – since you’re calling your friend on her regular phone number, she can receive your Wi-Fi call without downloading any third party app!
If these aren’t reasons enough for you to believe that Wi-Fi Calling is way better than telecom-network-based calling, then let me put it this way – you can save money!
You can save your precious Data plans and live like a boss even when you are broke.
The most interesting part of Wi-Fi Calling, for me, is that you could commence your calls at home or the office, using the Wi-Fi network, and then step out to your car or for a coffee, and the call will seamlessly transfer to a 4G telecom network! The hidden linchpin in this is that your network needs to support VoLTE (also called 4G Voice). And those networks are becoming available in more and more locations and countries.
Before I forget, such calls will be able to easily transition back to Wi-Fi too, when you return to the Wi-Fi umbrella.
I can hear the question from you – How is this different from something like FaceTime Audio? Well, not very much, exactly. Except that on FaceTime Audio, you’re running on Wi-Fi, and then 4G Data when outdoors. 4G Data is sometimes not as abundantly available as 4G Voice. So you may lose the FaceTime Audio call in some cases (and will definitely experience buffering/stuttering as you make the transition between calling technologies). With Wi-Fi Calling none of that should happen, ideally.
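The handover behaviour described above can be pictured as a simple preference rule. Here’s a minimal Python sketch (purely illustrative – the threshold, function name and logic are our own assumptions, not any carrier’s implementation): the call rides on Wi-Fi while the signal is usable, falls back to VoLTE when it isn’t, and drops only when neither bearer is viable.

```python
def pick_bearer(wifi_signal_dbm, volte_available):
    """Pick the bearer an ongoing call should ride on (illustrative only)."""
    WIFI_USABLE_THRESHOLD = -75  # hypothetical dBm cutoff for a usable Wi-Fi link
    if wifi_signal_dbm is not None and wifi_signal_dbm >= WIFI_USABLE_THRESHOLD:
        return "WIFI"
    if volte_available:
        return "VOLTE"  # seamless fallback to 4G Voice
    return "DROP"       # no viable bearer: the call fails

# Walking out of the office: Wi-Fi fades, VoLTE picks up the call.
print(pick_bearer(-60, True))    # strong Wi-Fi -> WIFI
print(pick_bearer(-90, True))    # weak Wi-Fi, VoLTE present -> VOLTE
print(pick_bearer(None, False))  # neither available -> DROP
```

The same rule run in reverse is what brings the call back to Wi-Fi when you step under the Wi-Fi umbrella again.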
So, imagine this – you’re getting out of office, running late, an old friend calls from overseas. You can still get out of the office if you’re on Wi-Fi Calling. Once you reach home, the call will automatically switch to your home’s Wi-Fi network – no interruptions!
In fact, the very fact that you can make calls even within a weak carrier coverage area is reason enough to start using Wi-Fi calling!
There are four major carriers in the U.S. that provide built-in Wi-Fi Calling: T-Mobile, Sprint, AT&T and Verizon. The service is also reaching other providers like Republic Wireless and Google Project Fi.
Bear in mind, all these carriers provide Wi-Fi Calling only on certain, specific phones.
Republic Wireless carries nine Android handsets, while T-Mobile has 27 smartphones that support this option. As for Sprint, Wi-Fi calling is available on a number of iPhone models running iOS 9.1 or higher. Several Android devices have the service as well, but you’ll need to go through your handset’s Settings menu to see if your device is eligible.
AT&T offers Wi-Fi calling on eight handsets, while Verizon has 14 phones in its Wi-Fi lineup.
India will hopefully get the same love soon, but only through Reliance Jio.
Jio is the only Indian telco whose network is built on VoLTE technology (since it built its entire network ground-up) and thus carries true 4G, i.e. 4G Voice and 4G Data.
Others like Airtel, Vodafone and Idea are saddled with old-world technology that only delivers 4G Data (and not 4G Voice), as they are built on costly legacy 2G circuit-switched networks.
So, if you’re in India, there’s another reason to consider getting on to Jio – though I’d wait a bit, considering the immense network issues currently plaguing Jio’s calling capabilities. I’m not being cheeky! Baby steps, is all I’m saying.
Hopefully, this (rather long) tutorial helped!
After so very long, Apple has done a “… One more thing”!!
Unexpectedly, out of the blue, well… red, there now is a new iPhone in town – and it’s Red!
It’s not a tinge of red, not metallic pink, it is all red. The back, the buttons, the fiddly little nano-SIM tray – all red!
There’s so much red on it, that you almost don’t notice that the front is white. The thing I love most about it? The silver Apple logo around the back. It just shimmers and pops against the gorgeous red!
Why am I gushing? Well, time to be honest – there’s never been any other exciting colour on iPhones, since… well, forever.
They created Rose Gold (and every other brand suddenly followed suit) – I know, I know. But for some reason, the pinkish phone never really struck my fancy. It was too, well, pale and subtle. There’s never been a stand-out, “look at me” colour on an iPhone. Nor a cheerful one.
Part of the (RED) campaign to help fight HIV/AIDS, this phone puts the focus right back on the noble cause, and on how much some of the largest brands and public figures are committing to it.
A portion of the proceeds from every sale go toward the Global Fund, a group committed to fighting AIDS around the world.
I don’t know if you know this, but Apple has been working with (PRODUCT)RED for about ten years now.
“For 10 years, our partnership with (RED) has supported HIV/AIDS programmes that provide counselling, testing and medicine that prevents the transmission of HIV from a mother to her unborn child. So far, we’ve raised over US$130 million through the sale of our (RED) products. Now we’re introducing iPhone 7 (PRODUCT)RED Special Edition. Every purchase brings us a step closer to an AIDS‑free generation“, (RED)’s website states.
So, all the iPhone 7 (PRODUCT)RED bang is for a good cause in a way.
If you’re still in the dark about the changes on this newest iPhone – well there aren’t any others. Except for the exterior colour, nothing else has changed on the devices. All the interior hardware and functionality remain the same – which is alright, because we’re sure they already have the iPhone 8 (or whatever it’ll finally be called) in the pipeline, and most Apple users would be expecting the real changes there. I don’t think the world would’ve settled for just this novelty on the iPhone 8.
Who all got this treatment? Well, only the iPhone 7 and the iPhone 7 Plus. All the other iPhones bear only the existing livery – clearly Apple’s giving you yet another reason to ditch that old iPhone!
Speaking of which – will folks ditch their current phone? Well, to be honest, my boss and I almost did. Instantly.
Realistically though, we do not have a clear answer to how many people would trade in their current devices for a ‘novelty’. We do have some speculation though – much like the iPhone SE last year filled in the gap in the 12 month product cycle, this novelty iPhone too, is kind of filling in the blank pause, till September 2017.
A lot of smartphone brands launch multiple models through the course of the year, but Apple doesn’t like to satiate the demand that builds up during summer. They let it simmer till Fall. But with mid-term product launches two years running, maybe, just maybe, they’re tentatively conceding that 12 months is too long a period of silence.
Further, while the new Red iPhone may be more a product of vanity than a product of genius, it sure does fill in another void nicely – that left behind by the Note7 crash-and-burn. In fact putting out a ‘new’ variant just before the Samsung Galaxy S8 has even been formally announced, may be another show of genius on Mr. Cook’s part.
He is known to be a shrewd operator, no matter what anyone says!
Moving on, also released alongside the iPhone 7 (PRODUCT)RED was a 9.7-inch iPad that comes with an upgraded processor and a dramatic price cut. The new iPad starts at USD 330 (for 32 GB), down from USD 400 previously.
Apple also increased the minimum storage on the iPhone SE to 32 GB (up from the previous lowest of 16 GB), without increasing the price.
There’s more. The new iPad Mini will only come in a 128 GB model, and that certainly is a lot of memory capacity for a tiny tablet!
You can get the iPhone RED starting March 24th, at USD 750 for the 128 GB iPhone 7 model and USD 870 for the 128 GB iPhone 7 Plus model. If memory’s failing you, these are the exact same prices as the non-RED models. Sweet!
Last, and I particularly love this part – for the first time ever, the new model launches in India, on the very same day as it does in the U.S. Now, that makes me proud! Apple finally proves that India’s just as important as the Dollar economy!!
Did you know that warships play a whole diverse set of roles – ships, rescue vessels, museums, offshore data centers – and, at the end of their floating lives, they are sometimes sunk to become homes for fish and to encourage coral growth!
Thing is, they make a great platform for all forms of activities – most of which aren’t in any way related to the warship’s original raison d’être. A platform by the very definition of the term, implies a vessel/asset that has multiple uses and applications.
Facebook is one of the world’s greatest IT platforms, and it’s turning the screws on something new. In yet another attempt to weave itself even more ubiquitously into our daily lives, Facebook is doing what it does best – enabling internet engagement by zapping the technology out of sight, leaving you with a sexy-looking container that draws you in.
Facebook is now using a new type of video ad to advertise products, designed to draw customers in and help them shop more easily.
Called Collections, Facebook’s new ad format aims to make e-commerce easier, for both, the retailers and the consumers. Considering that over a billion users actively prowl Facebook, why not bring them stuff they’d like to buy, showcase it beautifully and make it immensely easy to seal the deal?
Facebook’s video ads have the ad sitting on the top half of the screen, while the bottom half carries the recommendations for the products.
When the customer clicks on the recommendations below, it takes them to a fast-loading landing page, where people can browse through up to 50 products. From there, when a person selects a product, they are taken to the retailer’s website, or a third-party site, from where the product can be bought.
To appeal to retailers, Facebook lets them either manually choose the products to be featured, or have Facebook pull popular products from their site.
What Facebook here seems to be doing is simply appealing to the visual aesthetics of people, with the idea that more interactive content might elicit a strong buyer-response.
That said, an ad could also be an image (and not necessarily a video), however it is quite clear that Facebook considers this to be a video service.
“Three in four consumers say that watching videos on social media influences their purchasing decisions, which is why we designed collection“, a Facebook spokesperson said.
Reportedly, Adidas, Lowe’s, Tommy Hilfiger, Sport Chek and Michael Kors are amongst the first to test out the format. Being the big brands they are, they should be crowd-pullers.
But how successful the format will actually be, can’t yet be estimated.
There’s a reason for this pessimism.
Back in 2009, when Facebook first considered itself as a possible retailing store, 1-800-Flowers, a brand synonymous with sending flowers and gifts, got onto Facebook and established its online store on the fast-growing platform – assuming it was going to be another success.
1-800-Flowers had been successful almost everywhere else that it’d set up shop. They had pioneered new ways of retailing – a toll-free number, direct sales via the internet and immensely well supported service channels. But they soon discovered that it was going to be a tough sell on Facebook.
“We were one of the first to actually have a Facebook store, and we did have big expectations, but it turned out to be not very successful”, recalled Jon Mandell, Vice President of Marketing at the flower and gift seller.
That, in short, seems to be the story of e-commerce on Facebook, so far. For all the predictions and aspirations, Facebook has never really been able to even get close to becoming the e-commerce behemoth that market gurus once predicted.
Yet, it sure has not been for the lack of trying.
Facebook has been trying to get people to shop off of their News Feed for years now. It started out with their Buy Button, then went ahead on to in-platform stores and chatbots. They even managed to launch a Craigslist competitor, which also didn’t go quite so well for them.
Well, Facebook now seems to be bringing e-commerce across their social media presence – Instagram too, not too long ago received a new feature that lets apparel, jewelry and beauty brands tag their posts with shoppable links.
This again is an encouragement to the users to buy things then and there, right from within their Social feed.
There’s more – Facebook is also testing a new metric for Collection and Canvas campaigns – they call it Outbound Clicks. It provides Facebook’s advertisers stats on how many people clicked through to the brand’s website or app. These are in addition to the stats that measure how many clicked on an ad from the News Feed – which are already provided to the advertisers.
With these new reports, advertisers will now also be able to track how many people looked at the interstitial page and then clicked again to visit the retailer’s own site. This should give them better insight into what is appealing to the end audience, and what is not appealing enough.
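The funnel these two reports describe is easy to quantify. Here’s a hypothetical Python illustration (the numbers are made up for the example): feed clicks measure interest in the ad, Outbound Clicks measure who went on from the interstitial page to the retailer, and the gap between them is the drop-off advertisers can now see.

```python
# Hypothetical campaign numbers, purely for illustration.
feed_clicks = 10_000      # people who tapped the ad in their News Feed
outbound_clicks = 1_200   # of those, people who clicked through to the retailer's site

# The share who browsed the interstitial product page but never left Facebook.
interstitial_dropoff = 1 - outbound_clicks / feed_clicks
print(f"{interstitial_dropoff:.0%} browsed the products but never left Facebook")
```

A high drop-off would suggest the product page draws attention but the push to the retailer isn’t converting – exactly the kind of insight the new metric is meant to surface.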
Like someone said, the most intelligent minds in the world today are thinking about how to make you click on ads and buy things.
The question, however, stays: how successful will this be? Will Facebook be able to get its readers to finally make the jump from surfing to shopping?
Let’s just wait and see! If you’re a business though, you might want to give this a try – 1 billion plus people, is a good, rife-with-potential market, no?
Demonetization taught India how to live with Digital Payments. And while Paytm made a strong case for onboard (digital) wallets, it was still a bit of a convoluted process to first load money into its Wallet and only then be able to spend it.
Uber and others did allow the addition of credit cards to their apps, but even those required an OTP or password to be entered, to be able to spend money.
All of that said and done, India tasted digital payments and wanted more. Life had suddenly become easier – the need to visit an ATM, or a bank, or ask Dad for a loan of currency notes, was gone. Every merchant large and small was suddenly amenable to digital payments.
Plus, with Apple Pay and Samsung Pay already thriving in international markets, it was just a matter of time before they arrived here.
Samsung today launched its payments tool, called Samsung Pay, in India, beating Apple in the race to reach Indian consumers in this new, burgeoning space. And it’s exciting, because it simplifies life, and because it involves a bit of magic (you’ll see)!
What does it do?
Samsung Pay is a new digital payment service that absorbs all your credit cards, debit cards, and electronic wallets into one umbrella, which you can then use via your Samsung smartphone or smartwatch.
In simple terms, it replaces your plastic cards for transactions that you’d have made through swipe machines. It does not work for Online payments (i.e. websites or apps) just yet – though that is conceivably only a matter of time.
So, the obvious question is, how does it work?
Well, for starters, if your phone is one of the devices listed below, you will have to first install a service update which should be available over the air. Just head to the Settings section of your Android device and check for updates.
Here are the devices that are currently able to work with Samsung Pay in India:
Once your device is updated, you’ll be able to connect your payment method to the Samsung Pay application. This can be a card (credit and debit cards) or an electronic wallet (like Paytm) which will be saved to the device post verification.
No, really, how does it work? Won’t all merchants need new machines – which means that it’ll take 15 years for Samsung Pay to become usable?
A lot of things kill the acceptance of new services – complexity (during set up or usage), the need for new hardware (at the merchant or user level) and limited acceptability (remember how many merchants gripe when you want to use your Amex?).
In fact, the reason Paytm succeeded was exactly because it skated around all of the hindrances – it was ubiquitous, tremendously easy to use, and most importantly, because everyone was happy accepting payments through it.
There’s some magic in Samsung Pay!
Samsung has been truly brilliant with its approach. Knowing full well that India (in fact, almost every country in the world) would take many years to replace current credit card machines with NFC-capable ones, Samsung created and patented a technology that enables a Samsung device (smartphone or smartwatch) to mimic a magnetic card (like your credit or debit card).
Called MST (for Magnetic Secure Transmission) this patented technology replicates a card swipe by wirelessly transmitting magnetic waves from the supported Samsung device to a regular card reader. So, MST turns virtually every card swipe machine in the world into a contactless payment receiver, without needing any additional hardware or software upgrades!
Not only does Samsung Pay work with MST, it also uses the more advanced NFC protocol (when the device is placed near an NFC reader). Unlike MST, NFC works via radio waves and requires a specialised “receiver” in the receiving machine.
Both are secure, and neither needs any physical connection with the payment-receiving machine.
Samsung’s ingenuity in supporting both MST and NFC enables almost all merchants across the globe to accept Samsung Pay, making it one of the most widely accepted mobile payment services on the market.
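That dual-protocol fallback can be summarised in a few lines. Here is a hedged Python sketch (our own simplification – Samsung’s actual selection logic is not public): prefer NFC when the terminal supports it, otherwise fall back to MST, which any ordinary swipe machine can read.

```python
def transmit_payment(terminal_has_nfc):
    """Choose how the device talks to the payment terminal (illustrative only)."""
    if terminal_has_nfc:
        return "NFC"  # radio-based contactless protocol, needs an NFC reader
    return "MST"      # wirelessly mimic a magnetic card swipe on a legacy machine

print(transmit_payment(True))   # modern contactless terminal
print(transmit_payment(False))  # plain old swipe machine still works
```

The second branch is the clever part: because MST needs no new merchant hardware, acceptance does not have to wait for terminals to be upgraded.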
So… you should be able to use your Samsung Pay-capable device anywhere you like in India (and 11 other countries) starting today (though some merchants may not be aware of it for a while). Expect stares, incredulous looks, double checks and many questions from bystanders too!
To use Samsung Pay, once the merchant has input the amount to be paid on his credit card machine or NFC terminal, all you have to do is swipe up from the bottom of the screen on your device, choose one of your saved payment instruments and then bring your device close to the payment machine. The phone should automatically connect to the merchant’s machine, and you should be able to see a prompt on your device, indicating the demanded amount. All you then have to do is enter your PIN as if you were swiping a card, and hit “Pay”.
The machine should start spewing out the paper receipt shortly (post approval from your card issuer). That’s it, you’re done!
Which card issuers honour Samsung Pay in India?
The service will be available for users of Visa, Mastercard, Amex and Rupay payment cards, for now.
As for banks, ICICI, HDFC, Standard Chartered, SBI and Axis Bank cards are already supported. As is Paytm!
We’re hearing that UPI (Unified Payments Interface) and Citibank cards will soon be supported too.
Thus, this should be quite a functional service in metropolitan areas in a country like India.
Why you should use it.
First, there’s no need of taking out your card (and inadvertently leaving it behind at the merchant’s location) or even showing it to the waiter/cashier (since your card’s security number is visible at the back).
Second you don’t have to carry your wallet everywhere.
Third, in addition to the ease and comfort, the service also brings bank promotions on reward points, as well as offers from Paytm.
There don’t seem to be any additional charges that Samsung is levying for using the service.
The application also comes with built-in support, in case you are lost or need help using the service.
All this makes for quite a tempting package!
What’s in it for Samsung?
The launch of Samsung Pay at this time can be expected to give Samsung the first-mover’s advantage in the Indian market – the second largest smartphone market in the world, and one where the South Korean megabrand has already been the leader for quite a few years.
The service was first launched in South Korea in 2015 and is currently available in 12 countries including the US, China, Spain and Australia.
Yet, (and I particularly love this part) it took the company about two years to bring the service to India, despite the leverage the Indian market holds for the company. This was perhaps because the Indian market is still pretty traditional in its actual workings, and so are the concerns of the possible Indian users.
“We focused mainly on the barriers which were holding back people from going digital. We picked up the key themes centric to the Indian consumers — technical issues, security concerns and the lack of acceptability presence, and then integrated mobile wallets, UPI (Unified Payments Interface) and debit cards to Samsung Pay. The idea was to make in India for Indian consumers,” said Asim Warsi, Senior Vice President (Mobile Business), Samsung India.
What is noteworthy is that Samsung has gone the extra mile in bringing this service to the Indian market. They have worked to include debit cards and electronic wallets within the Samsung Pay ecosystem – options that are not available at the international level. Clearly, these have been integrated specifically with the Indian user and our market’s dynamics in mind.
Now that the Samsung Pay genie is out of the bottle, the next few months should tell us how the Indian market responds to Samsung’s hard work!
Go out tonight, give it a try, once you’ve set up your Samsung device for this new service! Me, I’m off, hunting for a store that’ll swap my Windows 10 phone, for that delectable Samsung Galaxy S7 edge! Or should I wait for Apple? Hmm…
Qualcomm 205 Mobile Platform Would Enable A New Industry - Simpler Phones.
Technology, just like fashion, is often accused of moving in a circle – the old is not just relegated to antiquity, it often, subsequently becomes a template of pragmatism.
Such is the case with phones too. They arrived on the scene with the promise of supplementing computers for basic tasks like email and scheduling, but with time (and horsepower) became far more than that. Smartphones arrived – and over time became even more powerful and convenient, advancing more regularly and rapidly than computers.
Today, we stand at a place where phones get more attention, updates and upgrades than computers do! Case in point, the yearly booster shot that iPhones receive vs. the once-in-five-years notional update that MacBooks get.
That said, there is App Fatigue, Notification Fatigue and Smartphone Aversion quietly mushrooming among smartphone users. A lot of people are (often secretly) yearning for less capable phones.
Thus, brands like Nokia are revitalising their older models, in a bid to relaunch themselves with the promise of a simpler life, through simpler phones that do all the critical stuff. Other brands too have espied the opportunity and are following suit.
Being one of the industry’s critical limbs, Qualcomm obviously has its ear to the shop floor – and is thus well aware of this undercurrent of customer sentiment.
It’s no surprise then, that Qualcomm’s jumped on the “simpler phone, rebooted” bandwagon.
The company has decided to put its money on the low-end phones. Dirt-cheap to the level of almost-disposable, these phones are still one of the major links of contact in countries like India.
Especially in poorer regions, these phones and their knockoff counterparts chalk up a sizeable chunk of total sales to this day. And if you’re still skeptical about that statement – even where feature phones’ market share is not going up, it is not going down either.
If you insist on statistics – 56% of all phone sales in India are ‘Talk and Text’ phones (data from industry research firm IDC Corp.). Compare this to another developing hotbed, Vietnam, where feature phones account for almost 49% of sales!
At the same time, smartphone shipments’ growth has stagnated at just over 3% in 2016.
Turn that into a corollary – a serious amount of money can be made.
Moving on to Qualcomm. This is one company that has been everywhere as far as mobile devices are concerned. While it has so far been known for high-end processors, like those powering 4G LTE-enabled smartphones, Qualcomm has just announced a chip for 4G LTE-enabled ‘Talk and Text’ feature phones.
There’s more to Qualcomm’s roadmap. They are building a universe that’s more of a platform than just processors. Backed by Qualcomm’s Linux-based operating system, these chips will add certain “smart” features to their low-end host devices.
There’s an intent to create a super-soldier of a phone. Apart from designated apps that can add some smart functions, the phones will also support mammoth batteries to enable at least 45 days of standby time, 20 hours of talk time and almost 100 hours of music playback.
The arsenal of these simpler phones will be long, and quite unexpected too.
A theoretical download speed of 150 Mbps, a 50 Mbps upload speed, Voice over LTE (VoLTE) and Voice over Wi-Fi, and dual-SIM support, coupled with Bluetooth 4.1.
So, you shouldn’t make the mistake of thinking of this simpler phone as your backup phone – it could unflinchingly perform the critical role of being your main phone as well.
So, why 4G LTE? And why not make these phones a full-form smartphone?
I know I touched upon this earlier, but that was more through a simpler-phone lens. Given the geographical areas that are exclusively buying feature phones these days, manufacturers can easily gauge an aversion toward smartphones perceived as “complex”.
The constant clamour for OS & app updates, the echoes of unreleased beta versions, complex and detailed UIs and most terribly, the inexplicable battery depletion – all become obsolete with simpler phones. This also paints the internet features as “unnecessary”.
Catering to consumerism isn’t always about giving buyers new stuff – sometimes it is about giving them something so complete that they don’t need anything else! Rising costs are also a factor.
So, as these feature phones gain traction, they would act as a bridge between those moth-eaten 2G flip phones that the world discarded, and their more expensive (and more fallible) 4G counterparts.
With Qualcomm’s 205 chip package to be released in the second quarter of the year, one believes Qualcomm would be assured of a bit of an edge over its rivals. The company might face challenges developing components at a really low cost, especially the Voice over LTE component. But rest assured, if the company manages this feat, it might well propel a “smartphone-averse” Indian farmer in Bundelkhand, or a homemaker in Bhopal, to a new understanding and appreciation of technology – one that is both easy on the pocket and reflective of a sound, consumer-centric, all-in-one maxim.
And speaking for myself, I might just jump on the simple phone bandwagon too, to un-mess my life. What about you?
LineageOS Quite The Rage: More Than A Million Users In Just 4 Months
Back in December of 2016, we’d written about LineageOS – you should read that article as well, to gain insight into this story.
Well, simply put, there was an internal rift between the brains behind the CyanogenMod platform and the company’s management. More specifically, the rift was between Lead Developer Steve Kondik and the then-CEO Kirt McMaster.
According to the grapevine, the rift had been triggered by the failure of talks with OnePlus – bringing out differences in the visions of Kondik and McMaster.
The rift turned public and finally led to a split and some reorganisation at Cyanogen.
After the split, Kondik left, while McMaster was removed from the position of Chief Executive Officer.
Despite all this, the source code of the CyanogenMod remained online and available to the world – for people to tweak, improve and to come up with new ideas.
A few days later, voices from the CyanogenMod developer team suggested that a new fork of CyanogenMod, named LineageOS, would be made available to users – developed by folks from the original development team, including Kondik.
Since LineageOS’ launch in January, the firmware has gained massive traction – it reached 500,000 users within a month; as on date, it is being used by more than one million users!
Ironically, LineageOS has been downloaded the most on OnePlus One phones! The next biggest beneficiaries are owners of Samsung’s 2012 flagship, the Galaxy S III. This substantiates the developers’ notion that older-model phones can run the firmware, and can actually see better performance with LineageOS installed than with the original OS that came with the device(s).
You may not believe us when we tell you that third position in the downloads list was taken by the OnePlus 3 and OnePlus 3T phones! This proves that LineageOS is not built for archaic devices alone – contemporary ones, too, can enjoy the ride.
So clearly there is strong compatibility between OnePlus phones and LineageOS. This is primarily because OnePlus allows users to root or unlock their device’s bootloader without voiding the warranty, so that custom ROMs can be run easily and efficiently.
And… OnePlus had strong ties to the Cyanogen Inc. team, having partnered with the company for the software on its original phone. OnePlus later allowed users to choose which software they’d prefer to run on the OnePlus One – its own OxygenOS or the original Cyanogen variant – after breaking up with the company.
Flashing ROMs has been a core part of the OnePlus experience for some users, and this adoption rate implies that LineageOS works perfectly well on such devices – contemporary or archaic.
What does this mean for LineageOS?
Well, at its prime, CyanogenMod charted numbers close to 50 million users. Given that the developers are the same, that the partnership with OnePlus can still gain traction, and that tapping into the Indian market could prove promising – things can be quite rosy for this up-and-coming OS.
And if they stay with the constant improvement spree they’ve been on, a 100% growth every month could be possible – and should that happen, it would make LineageOS the talking point for every phone manufacturer.
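That “100% growth every month” scenario is easy to sanity-check with a back-of-envelope calculation in Python: starting from today’s roughly one million users, how many months of doubling would it take to reach CyanogenMod’s ~50 million peak?

```python
import math

users = 1_000_000     # LineageOS today, roughly
target = 50_000_000   # CyanogenMod at its prime

# Each month of 100% growth doubles the user base, so we need
# the smallest n with users * 2**n >= target.
months = math.ceil(math.log2(target / users))
print(months)  # 6: 1M -> 2M -> 4M -> 8M -> 16M -> 32M -> 64M
```

In other words, sustained doubling would close the gap in about half a year – which is exactly why that growth rate, if it held, would make every phone manufacturer take notice.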
IBM Brings A Weather Alerts System to India That Doesn't Need the Internet
Necessity, the mother of invention, comes to our rescue in a myriad of ways.
At times, India’s spotty telecom networks put us in harm’s way, or at the very least, inconvenience us when we need technology the most. Case in point, you could be planning to head out for a day-long trek during your vacation, but being in the mountains, you’d most often not be able to check the weather in order to make your dressing decision for the day.
IBM’s coming to your rescue. It recently launched India’s first mobile alerting platform that will deliver weather alerts, even when you’re off the network, with no access to the internet or even Mobile Data.
Taking inspiration from a messaging app called FireChat (released in 2014), IBM’s technology uses a peer-to-peer mesh network to send critical weather information to people in disconnected areas.
Each smartphone in the service’s network becomes a node that stores the message containing weather information, and then securely passes it to the next nearest device. That device passes the information on farther, such that these “packets” of information reach the “disconnected” recipients through proximity-based communication.
A chain of sorts is thus built, and the need for a mobile network is eliminated.
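The store-and-forward relay described above can be sketched in a few lines. This is a purely illustrative simulation of our own – IBM has not published the actual protocol, and the node names and hop logic below are assumptions:

```python
from collections import deque

def relay_alert(graph, source):
    """Flood a weather alert through a mesh of phones.

    graph maps each node (phone) to the set of peers currently in
    proximity range. Each node stores the alert once, then passes it
    on to neighbours that have not yet seen it (a breadth-first
    flood). Returns the set of phones that received the alert.
    """
    received = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for peer in graph[node]:      # proximity-based hop
            if peer not in received:  # store once, forward once
                received.add(peer)
                queue.append(peer)
    return received

# A valley with no cell tower: phone A has the alert; E is cut off
# from the network but reachable over three peer-to-peer hops.
mesh = {
    "A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"},
    "D": {"C", "E"}, "E": {"D"},
}
print(sorted(relay_alert(mesh, "A")))  # → ['A', 'B', 'C', 'D', 'E']
```

In a real deployment each hop would happen over short-range radio rather than a shared dictionary, presumably with the message authenticated in transit, given the article’s mention of secure passing.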
Mesh networks enable devices to communicate with each other, without the need for a cell tower infrastructure. This is especially helpful in remote areas and other places where network connectivity otherwise is quite poor, or in the case of mass-scale outages of the cellular network.
For the moment, IBM is going to use this technology to impart information about the weather, but other applications of the same mesh network could easily be implemented in the future.
I know you may not consider weather to be critical information, but trust me, it can be crucial in remote areas, especially the hills or areas that are prone to tornadoes, or landslides, or heavy rains. Timely information will not only help people prepare for the impending extreme weather but also perhaps at times, be critical to their livelihood and well-being.
“Mesh Network Alerts networking technology is appropriately designed to notify of potential severe weather events or disasters – even in areas with limited Internet connection, or cellular networks are disrupted due to an outage,” said Himanshu Goyal, India Sales and Alliances Leader at The Weather Company, an IBM business.
The mesh network is designed for low-bandwidth environments, but is still able to offer the same high-quality experience, and deliver information, maps and alerts from The Weather Channel quite effectively.
Such services could be quite useful for a place like India, where more than half the population has no access to the internet, and where a lot of the terrain is not easily navigable. People living in the northern and northeastern mountains have long been affected by terrain that impedes telecom companies from building adequate cell tower infrastructure.
“Mesh Network Alerts can help send notification of an upcoming disaster that could help people and their families stay safe. It’s a matter of great pride for us as this technology is first introduced in India“, Goyal added. Those words alone should justify all manner of support for the service.
Nvidia And Bosch Team Up To Build An AI Supercomputer For Your Self-Driving Car
It is starting to seem like hardly a day goes by without some stunning news coming out of the automated automobiles sector. This time it is Nvidia and Bosch’s turn – names you may not really have associated with the automation industry per se.
A recent announcement made by the two companies stated that they are collaborating on creating an onboard computer capable of running the Artificial Intelligence (AI) necessary for self-driving.
The collaboration between chip maker Nvidia and automotive supplier Bosch aims to develop an AI-powered computer intended to make self-driving cars a mass-market product.
This computer is expected to be built upon Nvidia’s upcoming Xavier chip, which promises to be a solid processor.
Based on Nvidia’s current Drive PX technology, which also currently powers Tesla’s autonomous vehicles, Xavier is stated to be capable of 20 trillion operations per second while drawing just 20 watts of power.
What this truckload of jargon basically means is that the automobile computer resulting from this collaboration should be smaller and cheaper than Nvidia’s current Drive PX 2 unit, and beat its performance by a mile and a half!
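Some quick arithmetic puts the quoted figures in perspective. The Drive PX 2 numbers below are commonly cited ballpark figures we are assuming for comparison; they are not part of this announcement:

```python
# Figures from the announcement: Xavier targets 20 trillion
# operations per second while drawing just 20 watts.
xavier_tops = 20    # trillion ops/sec
xavier_watts = 20

efficiency = xavier_tops / xavier_watts
print(f"Xavier: {efficiency} TOPS per watt")  # → 1.0 TOPS per watt

# Assumed ballpark for the current Drive PX 2 (not from this
# announcement): roughly 24 deep-learning TOPS at a ~250 W TDP.
px2_efficiency = 24 / 250
print(f"Drive PX 2: {px2_efficiency:.3f} TOPS per watt")
print(f"Improvement: ~{efficiency / px2_efficiency:.0f}x per watt")
```

Under those assumptions, the efficiency jump is roughly an order of magnitude per watt, which is what would let the unit shrink in size, cost and thermals.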
“We want automated driving to be possible in every situation. As early as the next decade, driverless cars will also be a part of everyday life. Bosch is advancing automated driving on all technological fronts. We aim to assume a leading role in the field of artificial intelligence, too“, Bosch’s CEO, Dr. Volkmar Denner said in a statement.
“Self-driving car is a challenge that can finally be solved with recent breakthroughs in deep learning and artificial intelligence,” said Jen-Hsun Huang, Nvidia’s founder and CEO. “Using DRIVE PX AI car computer, Bosch will build automotive-grade systems for the mass production of autonomous cars“.
So what’s in the deal for the both of them?
Well, it’s quite simple. It gives Bosch a solid chipset to erect its AI-powered self-driving capabilities upon. For Nvidia, on the other hand, the partnership with the automotive component giant provides a ready market for its newest chip, and helps integrate that chip into several automakers’ cars at immense scale. With this move, the company will be able to bring its upcoming chip to the attention of various automobile brands of considerable repute.
This comes after Nvidia, the chip designing and manufacturing company, managed to line up quite a few partners at CES this year, including the likes of Audi and Mercedes.
Nvidia, however, is not the only one trying to sail a boat in those waters.
Recently, Intel, another of the world’s bigger chip makers, bought Mobileye for USD 15 billion, clearly with the intent of developing self-driving software and hardware for use across auto brands – especially given that Mobileye currently holds about 70% of the market for integrated cameras, chips and software for advanced driver assistance systems (ADAS).
With this partnership, Bosch is gearing up to give the Big Daddy a challenge.
All said and done, however fast these companies try to outrun each other, there is still quite some time before any of us will actually be driven around in autonomous vehicles.
In the meantime, let’s sit back and watch this journey unfold.
Twitter Tests Tagging Profiles With ‘Potentially Sensitive Content’. But They're Making The Same Mistake.
I’m not going to wax eloquent about the current prevalence of fake news or objectionable content, except to say that it seems to be pervading all forms of media – journalistic and social – and it’s casting an infectious shadow on a lot of people’s judgements.
We’ve covered the impact of fake news during the U.S. Elections and more recently told you about Germany’s government taking significant steps to reel in social media giants to clean up their slates – but the fact of the matter is that the everyday social media user (like you and me) seems to be becoming a skeptic. There’s a niggling doubt about what we should believe, and what we should ignore.
In the light of the criticism that they’ve been receiving over the matter, most social media platforms have started to take steps to try to put a lid on the problem.
The latest company to do this is Twitter.
Twitter has introduced a new feature, that they call the “Sensitive Account System“.
Twitter publicly flags some users’ profiles as containing “potentially sensitive images or language”, so as to help warn other users and ward off the aftereffects from the overly gullible.
What this also does is create an intervention page of sorts – instead of the potentially sensitive profile being displayed directly upon the first click, a warning page is inserted, and the visiting user will have to click an Agree button to view the profile.
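The interstitial flow described above boils down to a simple gate. Here is a minimal sketch of our own – the field and function names are hypothetical, since Twitter has not published its implementation:

```python
def render_profile(profile, viewer_agreed=False):
    """Sketch of an interstitial gate: a profile flagged as sensitive
    is hidden behind a warning page until the visitor explicitly
    clicks through. (Hypothetical names, not Twitter's code.)"""
    if profile.get("sensitive") and not viewer_agreed:
        return ("Warning: this profile may include potentially "
                "sensitive images or language. [Agree to view]")
    return profile["content"]

flagged = {"sensitive": True, "content": "tweets..."}
print(render_profile(flagged))                      # warning page first
print(render_profile(flagged, viewer_agreed=True))  # profile after Agree
```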
This move could be seen in multiple lights. It could be seen as a way to mediate the behaviour of “miscreant” users, so that once their profile has been flagged, they might try to be more careful about the kind of stuff they put out there.
This move could also be seen as a way to wall off these potentially sensitive accounts from the general Twitter populace – sort of like marking off potentially hazardous areas.
There are other, similar steps that Twitter has taken this year – like introducing a function that removes tweets containing potentially sensitive content from search results, and a 12-hour timeout for accounts that Twitter believes to be engaged in abusive behaviour.
Thing is, both these actions are enacted without the users knowing that they’ve been hit with the red card.
On the face of it, this new move of inserting an intervention page seems to stem from a similar policy – of going behind the “miscreant” user’s back, instead of buttonholing them head-on.
Thus, even though the idea behind what Twitter is trying to do here seems good in theory, the move has been the recipient of criticism. This is so primarily because like with most of Twitter’s anti-harassment measures, there’s a noticeable lack of transparency and a fair amount of obfuscation as to how accounts are deemed sensitive.
It can be quite frustrating for users not to know or be notified that their profile has been flagged; they will most likely find out in a very public manner – by others telling them so.
What’s worse, once Twitter tags or stonewalls you as such, neither is there a process to appeal, if you believe that your profile has been wrongly singled out, nor can you access or view Twitter’s review process that was used to flag your profile in the first place.
What is also quite glaringly unclear is the process that Twitter will use to mark such accounts as “sensitive“. Will it be based on other users’ reports, some kind of automated system that Twitter has put in place, or a team that goes through the content and decides what is appropriate and what is not? Nothing’s really known about this machinery.
Thing is – like our website, and every other user-focused platform – the platform becomes the property of the users. Democracy, remember?
The lack of relevant information on the process opens up the possibility that well-meaning and non-abusive Twitter users could have their accounts wrongly flagged as sensitive if enough trolls report them, or if Twitter’s own algorithms mistakenly identify some shared images or videos as inappropriate.
It could then be a situation similar to what Twitter had to face quite recently when they made changes to the features of the public lists.
Twitter had made changes to the way users are notified when other users put them on lists, and was then forced to roll back the change, because it was ending up contributing to bullying instead of helping combat it! If this new feature and the process behind it are not refined enough, the situation could be much worse.
However, the good bit of news is that the feature is still under testing, and has only been rolled out partially, not entirely. A Twitter spokesperson confirmed the new feature, saying “this is something we’re testing as part of our broader efforts to make Twitter safer”.
So if the menace the feature causes ends up being greater than its supposed good, we can cross our fingers and hope that Twitter does not make the feature a permanent part of the package.
All this being said, I must end with a clear statement of my own view – hate speech, objectifying people, and spreading disinformation all stem from malicious intent.
Make no bones about it. Such people should be weeded out and made to stand in the proverbial corner. So each of you – be circumspect in how you express yourself – do so decorously, and politely, and most importantly – speak only when you’re sure of your facts.
Others in your life and those beyond your immediate circle of friends do read what you write and see what you post – so they’re judging you too, and the reputation you’re making will remain in their minds, and the world’s internet servers, much longer than you physically do!
So speak up – but politely, intelligently, and gently!
Google's New reCAPTCHA Automatically Tells You're Not A Bot
Google’s new & revised reCAPTCHA is invisible!
Implemented in 2009, reCAPTCHA has become the de facto tool for websites to distinguish bots masquerading as humans, from real-world users.
With its name derived from the acronym’s expansion – Completely Automated Public Turing test to tell Computers and Humans Apart – reCAPTCHA has over the years used various methods to make this distinction. These include asking users to transcribe distorted words, confirm street views, identify pictures, or sometimes just tick a box. So, those annoying gibberish texts you’ve had to type out at the end of forms, or the ‘which pictures have a street sign’ pop-ups, actually played an extremely important role in keeping systems (and your accounts) secure.
Google’s reCAPTCHA is a rather hardworking tool that thrives on its efforts to keep websites safe from the various threats posed by rogue viruses and bots on the internet.
Now, the hardworking system has seen a major upgrade to its habits and intelligence. Think of it as intensive brain surgery…
Google has used a combination of machine learning and advanced risk analysis to update the system to detect human habits without dedicated interactions. So, when you are going about your business on the internet, this upgraded system should be invisible, opening gates (sites and forms) for you without your even realising how and when it’s doing that for you!
Your behaviour and basic interactions would be monitored, but only at a very superficial level – just to make sure you have warm blood flowing through your veins and not electrons buzzing over silicon circuits!
If you do trip Google’s risk analysis algorithms, then the new system might ask you to solve a simple puzzle, just to make sure that you are human indeed (a lot like the current grids).
Google did not reveal much about how the invisible reCAPTCHA works, but just said their technology will “actively consider a user’s engagement with the CAPTCHA — before, during, and after — to determine whether that user is a human“.
Our guess is that the system probably analyses things like typing speed, cursor movements, and rate of scrolling to determine whether a visitor is a human or a bot. People type relatively slowly, rarely move their cursors in straight lines, and usually take their time scrolling through a website. Bots don’t work quite like that.
Additionally, we’re quite sure the new system would also consider other variables like your IP address and perhaps check for any historical misdemeanours from that IP etc.
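To make our guesswork concrete, here is a toy heuristic of our own. Google has disclosed none of its actual signals, and every threshold below is invented for illustration:

```python
import statistics

def looks_human(key_intervals_ms, cursor_points):
    """Toy bot-vs-human heuristic (invented thresholds).

    Humans type with irregular timing and move the cursor along
    wobbly paths; a naive bot fires events at machine-regular
    intervals and jumps in straight lines.
    """
    # 1. Typing rhythm: humans show variance between keystrokes.
    irregular_typing = statistics.pstdev(key_intervals_ms) > 15

    # 2. Cursor path: on a perfectly straight path, every point lies
    #    on the line through the first and last points (zero cross
    #    product); humans drift off that line.
    (x0, y0), (x1, y1) = cursor_points[0], cursor_points[-1]
    off_line = any(
        abs((x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)) > 5
        for x, y in cursor_points[1:-1]
    )
    return irregular_typing and off_line

# Uneven typing, wobbly path → human-like
print(looks_human([120, 95, 210, 140],
                  [(0, 0), (40, 22), (90, 31), (160, 35)]))  # → True
# Machine-regular typing, straight path → not human-like
print(looks_human([50, 50, 50, 50],
                  [(0, 0), (50, 50), (100, 100)]))           # → False
```

The real system would of course weigh far more signals than these two, and would do so server-side across sessions rather than in a single function call.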
Clearly, this simple change to the authentication process should significantly reduce the number of times Google has to engage with users personally, thus enhancing their experience and making searches easier and quicker.
All in all, a hassle-free internet journey seems to be the end goal.
Slack – where all work communication happens for most 21st-century organisations, dominating not only the enterprise setup but also today’s startups – has some competition brewing.
Google and Amazon are both vying to take on Slack and carve out sizeable pieces of the burgeoning workplace-communication space.
Last month, Amazon acquired a startup called Do.com and converted it into Chime.
Just two days after this acquisition, Do.com announced on its official website that it had been acquired and would discontinue its service entirely on all platforms – web, mobile and Apple Watch – by the end of February. While Do never disclosed the name of its buyer, a hawk-eyed reader brought it to the attention of a news house that the company’s LinkedIn profile now mentioned that the startup was “now a part of Amazon Chime”. A similar change was noticed on the profiles of Do’s employees as well, thus confirming the acquirer.
To add spice to the story, as soon as some tech websites published the news of Do being acquired by Amazon, Do quickly removed all the evidence that pointed towards this conclusion – the official blog post announcing the acquisition was taken down, and the LinkedIn profiles that had given away the secret were modified. But the cat was already out of the bag.
Amazon officially launched Chime on February 13th, as part of its Amazon Web Services cluster.
All this hush-hush coyness reminds me of our Bollywood celebrities attempting to keep their relationship status under wraps. Amazon and Do are still trying to conceal the association from the world: there is no official announcement at Amazon’s end, and no one knows what part of Do.com was acquired, or at what cost. Curiouser and curiouser!
What does Do actually do? And how is it going to help Chime?
Do.com was a startup that had built a platform aimed at increasing the productivity of meetings, by providing services like managing notes made in advance of a meeting and even generating notes for absentees.
Amazon Chime, too, is defined in a similar fashion by Amazon itself – “(it is) a secure, real-time, unified communications service that transforms meetings by making them more efficient and easier to conduct.”
So, making meetings hassle free and more efficient is what Do will do for Amazon Chime.
Moving on – while Amazon doesn’t want to talk about it, Google, on the other hand, is out there conjuring up major updates to Google Hangouts.
Google is trying to make Hangouts a more business-friendly product, and considering that Hangouts already does a lot of what apps like Slack use to dominate the scene, there is clearly a need for Google to up the ante, tom-tom its wares, and make Hangouts’ presence felt.
Accordingly, Google announced some changes to Hangouts at the Next Conference in San Francisco.
Google has bifurcated Hangouts, part of its G Suite of workplace tools, into two separate apps: Hangouts Meet, a videoconferencing app, and Hangouts Chat, a Slack-like messaging app designed for teams to interact professionally.
Hangouts Chat’s new prime feature is all about group messaging – and, to be more specific, team messaging. Chat has also incorporated threaded conversations, without which any messenger app looks barren, and which Slack hasn’t been able to perfect as yet.
Additionally, to make things easier and more efficient, Hangouts Chat will now be able to perform advanced search and be able to filter conversations by file types.
What’s more, the update also enables users to create virtual chat rooms which would be a one-stop place to hold group conversations. No points for guessing this one, just like Slack!
And obviously, you can’t take Google out of Hangouts Chat, as the former’s services are deeply entrenched within Chat: when you share a file with a room, all of its members automatically get access to it.
All G Suite customers who apply for access will be able to enjoy the new feature. For starters, though, Hangouts Chat will only be available to companies in Google’s Early Adopter Program, and as of now there are no clear indications as to which features will cost money and which will be free.
Hangouts Meet, the other sibling is all about making your meetings hassle free.
With just a click, you’ll be transported into a meeting – be it an audio or video one.
Meet pairs with what Google likes to call a digital whiteboard, dubbed the Jamboard, which enables users to easily collaborate and view Jamboard displays remotely.
Google claims that this rewritten version of the Hangouts meeting experience will be lighter on the processor, and won’t gobble up your laptop’s battery life either.
What I love most about all this is that the whole thing will work without any plug-ins and, thanks to the cut-down size, load “instantly”. It used to irk me to have to download plugins on every new computer and click pop-ups to confirm Trust alerts. Hopefully, they’re gone now!
The reason Hangouts has been bifurcated is that Google wants to take better care of its enterprise customers. Google does claim, though, that these major changes to Hangouts are not aimed at overtaking Slack.
As per Prabhakar Raghavan, Head of Apps Engineering at Google, Slack already integrates with Google Drive. “We don’t intend to take away from that,” he said during a panel discussion at the conference.
However, that doesn’t negate the fact that a lot of what the new Hangouts does is showcase its wares better, and enable users with a lot of the frills and benefits that Slack users have enjoyed for a while.
Look out Slack, sleeping tigers are rousing!
Twitch, Amazon’s game-streaming world, has rolled out a social network tool called Pulse, which allows streamers to post and engage with their community about their favourite games instead of taking the conversation outside Twitch.
For now, Pulse supports media content from Twitch, Vimeo, YouTube, Imgur and Gfycat, but other formats will be incorporated in the near future.
“We are working to make sharing easier and will expand our support of additional types of content in the future,” wrote Twitch in a blog post.
Pulse lets users post messages, photos and videos for their friends and followers. Rings a bell? The format does sound similar to Twitter, as Twitch enables content creators to run contests, conduct polls, push memes, and even ask and answer questions to keep followers hooked even while not streaming.
To say that it’s a total replica of Twitter would be wrong as characteristics like “@” mentions or hashtags are nowhere to be seen on Pulse. But this doesn’t mean that they can’t be incorporated in the near future.
Pulse is embedded into the Twitch homepage and its mobile app, and, like on Facebook, other streamers on Pulse can comment on and react to your posts. Unlike Facebook, though, posts on Pulse will, for now, be displayed sequentially.
Interestingly enough, Twitch has planned it all out, giving broadcasters control over the moderation of comments on Pulse. If broadcasters want, they can restrict reactions to their friends only, or to their subscribers only. Broadcasters also have the power to delete comments.
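The moderation controls described suggest a straightforward permissions model. The sketch below is our own illustration – the field names and values are assumptions, not Twitch’s API:

```python
def can_react(post, viewer):
    """Check whether a viewer may comment on / react to a Pulse post,
    under the controls described: the broadcaster opens reactions to
    everyone, or restricts them to friends or subscribers only.
    (Hypothetical data model, not Twitch's actual API.)"""
    audience = post["reactions_allowed"]  # "everyone" | "friends" | "subscribers"
    if audience == "everyone":
        return True
    if audience == "friends":
        return viewer in post["author_friends"]
    if audience == "subscribers":
        return viewer in post["author_subscribers"]
    return False

post = {
    "reactions_allowed": "subscribers",
    "author_friends": {"alice"},
    "author_subscribers": {"bob"},
}
print(can_react(post, "bob"))    # → True  (a subscriber may react)
print(can_react(post, "carol"))  # → False (neither friend nor subscriber)
```

Deleting comments would simply be a separate privilege that the check grants only to the post’s broadcaster.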
The company introduced Pulse in a blog post as working on the basis of an algorithm that surfaces content matching each user’s interests.
“Our goal is to connect viewers with the content that they’re most likely interested in”, Twitch spokesperson Sheila Raju wrote in a blog post. “Going forward, we will be working to determine the best way of surfacing posts to do just that”.
Pulse, in essence, is an extension of Twitch’s Channel Feed, which was launched in beta last December, and is supposed to eventually roll out to everyone later this month. The fact that it is being rolled out to everyone in itself confirms the success of the feature in its beta phase.
Additionally, Twitch also rolled out IRL at the fag end of last year. IRL is designed for people to share real life experiences instead of video games.
Pulse will appear on the front page of Twitch, and all posts from Channel Feed will make their way to Pulse as well. The idea is to retain viewers even when their favourite content is not being streamed, and to give streamers an opportunity to reach a larger number of users via their posts.
“This will allow you to not only interact with followers and viewers you regularly engage with, but also with those who might not visit your channel page as frequently”, Raju wrote in the blog post.
Just like Twitter, Pulse is open to content from both the Channel Feed and the Broadcaster Dashboard, and viewers can also post videos, pictures, etc. from their front page.
Pulse is Amazon’s maiden voyage into the sea of social media; however, it is not something out of the blue, as Amazon has been toying with Twitch’s social scope for the last few months. The only question that remains now is whether Pulse will be able to make a dent in the world of social media giants like Facebook and Twitter.
Twitch’s user base should not be underestimated, though: there are already 100 million monthly viewers spending 106 minutes daily watching live gaming – a number that is expected to rise, with Pulse keeping users hooked on Twitch instead of fleeing to Facebook or Twitter to discuss their favourite games.
So, this may just be the start of a new journey for Amazon, and Twitch.
Zuri Power Bank U28: The Feature Phone That Charges Your Smartphone
Nowadays people carry a feature phone for one of four reasons – they’ve lost their primary phone, they’re in detox mode, it’s their fall-back phone for when their primary (smart) phone’s battery dies mid-day, or they can’t afford a smartphone.
Well, here’s a really good reason for anyone who carries a smartphone to carry a feature phone too – to rejuvenate their dead smartphone back to good health!
Phone manufacturers fight a constant, unending uphill battle against battery life. Their increasingly powerful products drain the battery faster than an elephant gobbles a cucumber.
And unlike ordinary phones, which you could operate for days on a single charge, these large screened mini computers unceremoniously die before the day ends. Companies thus spend millions of dollars on R&D and only marginally improve battery capabilities.
And it’s a battle that they won’t win. People are using their phone a lot more with each passing week, apps are constantly being updated to connect with more features on the device, and operating systems are becoming hungrier too. All this inadvertently costs more energy and negates whatever little progress is made on the battery’s capabilities.
The Zuri Power Bank U28 simply avoids the hassle of devoting more time to power solutions – instead giving the consumer a simple, affordable and ingenious tool that is not only a bargain, but also genuinely useful.
The Power Bank U28 is a feature phone that comes with an ultra-large 4,000 mAh battery and a flash-charging feature, so it can be used as a power bank for your smartphone whenever required.
The hardware on the U28 is not bad either – a 2.8-inch display, a numeric keypad, 32 MB of RAM, 32 MB of storage, a VGA camera, dual SIM support, Wi-Fi, FM radio, and even GPRS for slow data networks. It also comes with a flashlight, a web browser and SMS.
Why Shouldn’t I Just Get A Power Bank?
That is a fair question which does not really require much of an answer. The U28 has no distinct differences from other feature phones except for the big battery and the power bank feature.
But perhaps it might turn out to be a boon for people who carry power banks and want to get rid of the extra weight in their pockets. A dual SIM phone that charges your smartphone while you call does not sound bad, does it?
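Some back-of-the-envelope arithmetic on that 4,000 mAh figure. The transfer efficiency, reserve headroom and smartphone battery size below are our assumptions, not Zuri’s claims:

```python
# Stated: the U28 carries a 4,000 mAh battery.
u28_capacity_mah = 4000

# Assumptions for illustration: power-bank style transfers typically
# lose a chunk of energy to voltage conversion and heat, and the
# U28 would keep some charge for its own calling duties.
transfer_efficiency = 0.75    # assumed
reserve_for_u28_mah = 800     # assumed headroom the phone keeps

usable_mah = (u28_capacity_mah - reserve_for_u28_mah) * transfer_efficiency
print(f"Roughly {usable_mah:.0f} mAh deliverable to a smartphone")

# Against an assumed 3,000 mAh smartphone battery:
print(f"≈ {usable_mah / 3000:.0%} of a typical smartphone charge")
```

Under those assumptions, the U28 could hand a typical smartphone most of a full charge and still stay alive as a phone – which is the whole pitch.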
What We Can All Agree Upon.
This is how budget phone companies should proceed if they want to subsist. If phones pack better batteries and more features at an affordable rate, then urban folks with crowded lives would not mind purchasing them.
But for now, given that this could become a new ‘industry’, Zuri does add a trend to the market – think about it – would it not be a bit of a miss if your friend carries a phone that charges his smartphone, while you beat yourself up for not buying one?
At times when mega-corps like Google fail to address issues in their own systems for a sustained period of time, start-ups on shoestring budgets come to the rescue. CopperheadOS, built by a two-man team based out of Toronto, is working to close the Android security loopholes that have been in conversation a lot lately.
The start-up has even demonstrated results where other “secure” Android phones have failed, not only raising but also addressing the security concerns with regard to what is unarguably the most used smartphone operating system on the planet.
CopperheadOS is a hardened open source operating system based on Android OS. The OS aims to integrate Grsecurity and PaX into their distribution. It also includes numerous security enhancements, including a port of OpenBSD’s malloc implementation, compiler hardening, enhanced SELinux policies, and function pointer protection in libc.
For those who do not speak the language of the computer geeks, what this simply means is that CopperheadOS is bringing to Android more privacy-centric features like alternatives to Google apps, and separate encryption and lock screen passwords.
It is, for now, available only on a select few Nexus devices.
The first challenge was to find a handset to support the OS variant – one that offered regular security updates, which is certainly not a small ask, in the world of Android!
Most companies do not ship out monthly security updates available from the AOSP.
It was thus that they zeroed in on the Nexus devices whose software, if not hardware, Google controls directly, and which receive prompt monthly security updates.
“What we’re doing is starting with the Nexus; a pretty good starting point,” Copperhead’s Daniel Micay explains. “And we’re significantly improving the security of the operating system. We’re making a lot of under-the-hood changes and exploit mitigation to make it harder to exploit the vulnerabilities that are there“.
“Copperhead is probably the most exciting thing happening in the world of Android security today“, Chris Soghoian, principal technologist with the Speech, Privacy, and Technology Project at the American Civil Liberties Union, said. “But the enigma with Copperhead is why do they even exist? Why is it that a company as large as Google and with as much money as Google and with such a respected security team—why is it there’s anything left for Copperhead to do?”
The question of why there is still space for Copperhead, or the likes of it, to exist has been raised many a time. While something like it does provide users with more security, the question remains: why must Google – the world’s biggest computing company in many ways, and the company behind Android – leave such large and obvious gaps in the security of the OS?
One possible reason could be performance and ease of use. While Google’s Android security team has accepted many of the security patches put forward by Copperhead into its own system, a great many of them will perhaps never make their way in, simply because of the performance trade-offs associated with them.
With measures like those in CopperheadOS, the polished experience of an average Android user is cut down. What you have then is a limited number of features, limited apps from select third-party app stores, and other trade-offs that might not go down too well with the mass market.
On the other hand, it’s telling that Android exists in the same space in time, and with the same level of technical access, as iOS. Yet in iOS, security loopholes of such magnitude just cannot be imagined. “If I had to imagine the world where there’s a Copperhead for iOS, I don’t even know what I’d change“, said Dan Guido, CEO of Trail of Bits. “The Apple team almost always picked the more secure path to go and has found a way to overcome all these performance and user experience issues“.
The difference lies in the manner in which Apple controls its ecosystem, and in the manner which Google does not do so, in any significant or telling manner.
Google ceded control of the end experience of the OS to each of the brands that manufacture Android devices, leaving the OS open to being manipulated and tweaked however a brand wanted.
While this allowed handset manufacturers to find ways to differentiate their products, it also allowed wireless carriers to disable features that they thought would threaten their business model (with no real care of user data privacy).
CopperheadOS might serve as a proverbial band-aid on the gaping security wounds of the Android ecosystem for now, but whether it will actually be able to patch the wound to any considerable extent is only a matter of speculation at the moment.
As more security loopholes are revealed, no one can really say what Copperhead might really be in for.
As BlackBerry works to revive its once-legendary brand, it is going down two different paths: licensing out its legendary security platform to other enterprises, and embracing Android firmly, for what may well be BlackBerry's final salvo in handset production.
It's this second initiative that has us interested. BlackBerry has just released another device with the trademark physical QWERTY keyboard that was once upon a time synonymous with the BlackBerry brand name.
Called the BlackBerry KEYone, this new smartphone is a combination of a big-screen device (an ode to contemporary market trends) and the physical keyboard of BlackBerry's old-school phones.
First up, you should read our write-up about the device, available here.
Second, the verdict (since most of you will be eager to get to that aspect first) – the device is a fairly solid product, really!
The physical QWERTY keypad is brilliantly made – the keys are the perfect combination of soft and tactile. The individual keys may appear a bit smaller than most would like, but that's only an initial impression, during the teething phase. Once your hands settle in on the phone, your thumbs find their exact spots fairly quickly.
The phone comes with a 4.5-inch touchscreen display, a perfect in-between size – between a 4-inch small phone and the 5.5-inch large-screen layout of most phablets.
The device is powered by a mid-level (but quite adequate) Qualcomm Snapdragon 625 processor with 3 GB of RAM. Smartly, the KEYone runs the latest Android version, 7.1 Nougat.
Clearly, the KEYone has a screen larger than older BlackBerry devices – something of an experiment, I think, to find the sweet spot that BlackBerry is still trying to establish for its current line of “hybrid” phones.
Many sites belabour the fact that BlackBerry's place in the market has been declining over the past several years, to a point where the brand almost seemed dead. We have always differed.
If there is one thing no one can ever brand BlackBerry with, it is helplessness. BlackBerry never sits in a corner wringing its hands, nor cowers away from trying new things.
One of the grittiest brands ever, BlackBerry has astonished many, many people with its desire to reinvent itself, even attempting pivots – finding things in its immense arsenal to bootstrap its way back to high ground, and to keep its hardware business going.
With this device, and the nostalgia BlackBerry seems to be trying to evoke, the company appears to be planning a return to branding its devices as business phones.
They're positioning this device as easy to use, comfortable to type and scroll on, and blessed with good battery life – all of it with the famous security and privacy that no other company has been able to match.
The company didn’t shy away from emphasizing the security of the device. “At BlackBerry, we live and breathe security. Security has been engineered into the entire manufacturing process, throughout the hardware and of course the software“, said Alex Thurber, the General Manager of BlackBerry’s Mobility Solutions unit.
What is perhaps noteworthy is that the BlackBerry KEYone, though it carries the BlackBerry brand name, was not designed or produced in-house by the Canadian company. Back in December, the company announced that it was halting all in-house smartphone production. It subsequently signed a deal with the Chinese electronics brand TCL, giving TCL the rights to produce devices under the BlackBerry brand.
As per the deal, BlackBerry stays in control of the security and software on the devices, while TCL produces the Android-run hardware. The KEYone is thus the first BlackBerry device this combination has brought to market.
“The new BlackBerry portfolio has a chance of success because few companies now offer BlackBerry-style design and features, and the productivity-focused smartphone segment is underserved“, said Ian Fogg, Head of Mobile at research firm IHS.
The phone certainly makes it feel like BlackBerry is back, and all set for the competition!
Budding Author? Amazon's Simplified Its Self-Publishing Process For Your Next Paperback
Over the last few years, Amazon’s Kindle Direct Publishing program has almost single-handedly made self-publishing ebooks a mainstream thing.
This program (and the relatively cheap Kindle devices) has enabled writers to reach markets that they would never have reached before.
Actually, the capability to self-publish a paperback is not new to Amazon's KDP program; it's been around for a while now. The process, however, has been quite clunky, with hiccups strewn across various parts of the design workflow.
Until now, authors had been forced to use a different design program for each publishing format (digital and print). So if an author wished to get printed copies of an already-published e-book, they would have to compile the entire thing again in a different format, with different tools.
Amazon has now decided to move to a single point of entry for both formats, using an updated version of Amazon’s CreateSpace system.
Authors no longer need to spend any extra money up front for print runs, as KDP will debit the printing fees from the resulting book sales. The 60% royalty passed on to the author for each book sold is approximately 10% lower than the highest e-book royalty, but it is many times higher than the royalty most publishing houses offer their authors.
There’s another saving – authors will no longer need to worry about running out of copies since the books will be printed on demand (and undoubtedly be shipped expeditiously thanks to Amazon’s mind-bendingly big logistics infrastructure).
While the feature set of the older version of CreateSpace was much more robust, including physical proofs and author copies, the new beta KDP process will quite certainly catch on eventually.
There are many reasons for that, the primary one being that this new feature will not only help authors (and Amazon) reach customers who prefer their books on paper; it also has the potential to streamline a lot about the publishing industry in general.
The power to publish a book and bring it to market has mostly resided with publishing houses till now – simply because they command the army of resources necessary to get a book to its audience across continents and oceans.
This painless integration is a boon in many ways, and we'll only see its power in full glory as more authors turn to this facility to bring out their stories in print.
While the e-book market has been growing exponentially, many of us admit that we still prefer the more traditional paperback. Print has shown incredible resilience to the digital onslaught, and with this self-publishing capability gaining more muscle, it will be interesting to see how many new authors find their way into readers' hands.
Are you one of those budding authors waiting to express yourself? Don’t wait any longer – type away!
Watson, IBM’s cognitive computing platform, is now all set to fight against… cyber crime!
This marks the Information Technology industry's first step in using Augmented Intelligence technology to power ‘cognitive’ security operations.
Some of you may remember Watson as the machine that won the million-dollar jackpot on the famous American TV quiz show, Jeopardy!.
For those of you who aren't aware of Watson's genes, it is a machine learning product that uses “natural language processing” and statistical analysis to achieve “reasoning”.
With its immense abilities proven in labs and test environments, IBM has, over the years, been pushing Watson into a whole mélange of difficult territories – tackling real-world problems such as cancer diagnosis and treatment, traffic accident prevention, even tax reporting! Gratifyingly, Watson has shone through each time.
Over the past year, Watson has been under training – learning the language of cyber security and ingesting over 1 million security documents. This knowledge empowers Watson to help security analysts parse thousands of natural-language research reports really quickly, drawing inferences and interpretations that have never before been accessible to modern security tools.
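To get a feel for what “parsing natural-language security reports” means at the simplest level, here is a toy sketch – emphatically not IBM's actual pipeline, and the report text and pattern names are invented for illustration – that pulls structured indicators out of free-form report prose. Watson's cognitive approach goes far beyond pattern matching, but the input/output shape is similar:

```python
import re

# Invented sample text standing in for an unstructured security report.
REPORT = """
Analysts observed the dropper beaconing to 203.0.113.42 over HTTPS.
The campaign exploits CVE-2017-0144 and drops obfuscated payloads.
Related activity was tied to the domain update-check.example.com.
"""

# Naive indicator patterns; note the domain pattern also catches IPs,
# which a real system would de-duplicate and validate.
PATTERNS = {
    "cve":    re.compile(r"CVE-\d{4}-\d{4,7}"),
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "domain": re.compile(r"\b[\w-]+(?:\.[\w-]+)+\b"),
}

def extract_indicators(text):
    """Return a dict mapping indicator type -> sorted unique matches."""
    return {name: sorted(set(rx.findall(text))) for name, rx in PATTERNS.items()}

indicators = extract_indicators(REPORT)
print(indicators["cve"])  # ['CVE-2017-0144']
```

The gap between this sketch and Watson is precisely the point of the article: regexes can only find what you already know to look for, whereas a cognitive system is meant to draw inferences from the surrounding prose as well.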
According to IBM research, security teams sift through more than 200,000 security events per day on average, leading to over 20,000 hours per year invested in chasing false positives (you should read our previous article to understand what false positives are).
Therefore, the introduction of real-time, accurate, reliable and fast cognitive technologies into security operations centres may well be the wildcard that security teams critically need to keep up with the ever-evolving Bad Guys.
Notably, the security community anticipates a doubling of security incidents in just the next five years. Obviously, this calls for increased observation, regulation and intervention everywhere in the world, including India – a hotbed of technology and development.
“Information security assumes paramount importance around the schemes such as Make in India, Digital India and Skill India.
The rapid digitisation of the country is bound to throw up additional challenges hitherto unseen and will need solutions which can analyse, detect, predict and prevent security threats including emerging ones.
The ‘Make in India’ initiative is driving global organisations to set up manufacturing bases in the country and while they bring in global best practices, the IP [Intellectual Property] brought in and the IP to be generated within country needs to have best of the class protection.
Cyber Security skill, which is part of the third initiative, Skill India, needs to have augmented high end skills complementing the workforce coming out of this scheme for better threat protection.
This calls for the paradigm shift in Cybersecurity, i.e. Cognitive Security.
IBM Watson for Cybersecurity is that ‘shift’ that will bring together advanced cognitive technologies with security operations and provide the ability to respond to threats across endpoints, networks, users and cloud,” said Vaidyanathan Iyer, IBM Security, India.
According to IBM, Watson for Cyber Security will be integrated into IBM’s new Cognitive SOC platform, bringing together advanced cognitive technologies with security operations and providing the ability to respond to threats across the various “nodes” – endpoints, networks, users and cloud.
The centrepiece of this platform is IBM QRadar Advisor with Watson, the first tool that taps into Watson's corpus of cyber security insights. This new app is already being used globally by customers to augment security analysts' investigations into security incidents.
IBM has also invested in deep research to bring cognitive tools into its global network, including a Watson-powered chatbot currently being used to interact with IBM-managed security services customers.
There is also a new research project, code-named Havyn, that is pioneering a voice-powered security assistant leveraging Watson conversation technology to respond to verbal commands and natural language from security analysts.
As Google Maps introduces a new feature that enables you to create, and share with others, a list of your favourite locations, we wonder if this is a turning point (no pun intended) in Google Maps' raison d'être (reason for existence).
This new feature is, of course, a nice way to organise all your favourite restaurants, bars, museums, and coffee shops in one place. But what makes it even more interesting is that you can now follow a friend's Google Maps lists, or send them yours via text, email, messaging apps, or social media.
Aimed at making the mapping application more involved with your daily life (and not only for commutes), this makes the Maps application something you’d turn to more often (there we go again – no pun intended, again).
This change, though small in technical terms, does provide Google a new and potentially very lucrative avenue to learn even more about you. But an important question arises: is Google Maps trying to mimic a social network?
Waze was the first, and perhaps the last, well-known navigation service with a somewhat social angle built into it, thanks to its crowdsourcing bedrock. Guess where Waze is now?
In Google. Entrenched deep within Google Maps.
Now, Google seems to be melding what it got from Waze with what it observed from Foursquare (and its check-in spin-off, Swarm), and creating a new work stream.
Google said that this new option will give users an easy way to share information, places, etc. with their friends. “Previously, people could ‘star’ places on Google Maps, but there was no way to organize that information or share those places with people“, Google spokeswoman Elizabeth Davidoff wrote in an email.
It is the ‘share’ part of this new rollout that weighs most heavily in Google's approach.
Google has been operating for a while now under the assumption that people want to use Maps for reasons other than simply finding their way in an unknown area. This also reflects in other recent features added to Google Maps, including the ability to hail a ride from Uber or Lyft, or to find out whether parking is an issue in an area you're intending to visit.
This new move also raises an important question with respect to how people use Google Maps.
Google has done a great job of mapping pretty much the entire world through labour-intensive efforts like Street View, and of crowdsourcing real-time traffic updates through Waze.
In addition, it has added popular locations, businesses, ratings, comments, photos, and other details that were previously exclusive to sites like Yelp.
This new move will not only allow users to personalise the data available on Google, but also to share this personalised data.
Things like checking traffic, or finding nearby restaurants, bars, petrol pumps, banks, etc., have been widely used Google Maps features for a while now. They are quite helpful and user-friendly, but all of them follow a user-service approach, working towards finding information pertaining to the user's need. This new feature, enabling people to share their Google Maps lists, marks a change in that approach – not just helping users find what they need, but also enabling them to “share” it with others… Facebook much, Google?!
As we mention Facebook, we must not ignore another important angle to Google’s new move.
As is almost always the case, the trade-off for convenience is increased data collection. This new feature is quite obviously another way to collect information about users' preferences and interests. Information of this kind – where people went, or where they want to go – can be really valuable to advertisers, and it could easily be used to target customers more effectively.
So, Google Maps might be trying to mimic a social network – but with the desired, friendly, and popular also come concerning, controversial features.
The new feature is available on Android and iOS starting this week.
There’s no question about it. Wearables are poised to become the next big thing in the consumer-tech industry. But it’s not because they’re a new must-have breed of gadgets that people are yearning for whimsically. There’s more to it than idle hankering.
In fact, there are plenty of good reasons, most of which aren’t yet expressly known even to the yearners.
Human life is changing. Caught in a constant flux, people are always on the go. And no, it doesn't have to do with vocational pressure or the desire for material gain. It's got to do with being the target (or recipient) of a constant, unending stream of updates, notifications, alerts, calls and email dings. Consequently, we've all got a new perpetual appendage – our smartphone(s) – and every single person in the modern world is suffering fatigue from it.
Fatigue of a nature that’s never been seen, felt or even estimated earlier.
People are already suffering notification fatigue, with countless apps, social networks and emails constantly bubbling through the day – every day seems like a constant stream, nay barrage.
Phones don't leave hands, and if they do, it's only because they're sucking in more juice as the battery runs out, not because we decided to put them away voluntarily.
Many suffer mental fatigue. There’s always so much going on, that there’s a dullness in the mind. Constantly. Even at 10 am.
The thing is (and most people don’t realise this intuitively) – a notification is not as innocuous as it sounds. It’s actually the sound of the opening of a vortex.
Picture this: you have a vacant half hour in an otherwise busy day. You plan to grab a bite before the next meeting. You hear a ding, you drop the sandwich, grab the phone, check the notification, then the next one, and then remember you had to text someone. You do that, and then there's the mailbox you want to peek at in case you got something new. Nothing new? Well, looking at the unending list in the mailbox, you remember a mail you wanted to action – suddenly you're pecking away at a response. Then you realise that the reason you hadn't responded earlier was that you needed to check a factoid with a coworker before penning the response – so a quick call to the coworker, then back to the email. As you do that, someone WhatsApps you, and you shoot off a quick emoticon.
Look at the watch, 42 minutes gone. You’re now late for the next appointment.
The sandwich lost its place in your day. And you’re going to have a rumbling tummy that speaks out exactly as you enter the meeting room and commence your apology speech.
Here's another challenge – quick, didn't you just check your phone to see how much battery you had left? Be honest – a minute ago? Ten? Bet I'm right. We're so paranoid now about being cut off if the phone dies, so constantly on tenterhooks… subconsciously waiting for the phone to buzz (just so we know it's alive and well), that even silence unsettles us!
It's a crazy world to live in. And it's not going to get any easier.
Yet, there’s something we can do about it. Something that’s a little weird at first mention. But bear with me…
Much as I painted a forlorn picture about devices, the solution I'm about to recommend is actually going to be more of the same!
The day you have the money, get a wearable – for the two or three top activities you do on your phone. Let me explain.
If you want to get on the exercise bandwagon (to watch your weight, to pump your arteries, or simply because you like being limber), get a fitness band and leave your phone at home as you exercise.
If you want music when you walk, get an iPod.
If you want to know about your Facebook feed, or know when an email comes in, or just stay aware of what's going on on your phone, get a smartwatch (something nominal will do too).
Why? Because without really knowing it, you’re getting a little tired of carrying your phone(s) around everywhere, or holding it constantly, in order to monitor it. You need a break, and a well-equipped wearable is going to help.
It'll monitor what it needs to, apprise you as needed, and do only a few things – but do all of them discreetly. And it'll only notify. Which means it'll grant you an option: register the cause of the alert and either tap it away, or send a quick acknowledgement to the sender/app and continue enjoying what you were doing. It'll simplify your day and handle some of the mundane things you needn't worry about just yet. And maybe, just maybe, the vortex will close down for a bit, till you're ready to be sucked in again.
Important disclaimer: while there are a million manufacturers making all sorts of wearables these days, most wearables are still at version 1.0 of their evolution. So if you're smart, you should buy something based on functionality – not price, nor brand, and definitely not colourful ads. Go easy on the pocket right now, and get the crackerjack version a year or two later. By then the kinks will have been ironed out, and you won't need to buy disparate hardware for different tasks and purposes. Wearables, like all other equipment before them, will reach their zenith in future evolutions, and will amalgamate such that only the fittest survive. Wait for Mr. Darwin's theory to strike the usual death knell. You'll be the richer for the savings.
Chip-Monks has been researching wearables for a while now, and we’ve collated a great list here. Head over, check it out and get something that meets your needs.
But as we sign off, here's some more sage advice from the 'Monks – get off your phone! Look up! There's a whole world out there – with birds, and flowers and the setting sun, people and smiles, an elder who needs help crossing the street, a huggably cute puppy, perhaps a new dress in a shop window and (as in my case) a child who's chattering away to you, believing you're listening to every word.
Listen. Enjoy. Live a little.
Get off your phone, get a wearable, ‘cos you aren’t getting today back.
Social Media. Memes. Pizza.
An average human being, whose life is largely governed by the Internet, has a lot to do with these three words. The latter two are food for the mind and the stomach, respectively; the former has a lot to do with a person's social life.
In fact, many users' entire social lives are dictated and judged by the number of followers (or fans) they have on social platforms. Such is the condition now that it borders on narcissism.
Of the most famous social media platforms that rule people's universe today – Facebook, Twitter, Instagram and Tinder – each, barring the last, is not just a stage to connect with people and share a laugh or two; they also host debates and discussions, along with stupidity, mockery and even bullying on the flip side.
Thanks to the enormous reach these platforms have, one's opinion can be voiced to a sea of a million people with one mere click. Unfortunately, these same social networks have their dark side too.
Pitch dark, to be honest.
Cyber Bullying Statistics from 2016 reveal that more than 33% of the teens on social networks have experienced cyber bullying.
Apparently, there has been a sudden explosion of memes (images or clips humorous in nature) – now there are memes about memes! (Looks like we need to discuss unemployment stats.)
With almost everyone having an opinion, based on the new habit of headline-skimming, these platforms provide a huge soapbox for the ill-informed and the downright ignorant to hold forth on things they don't know much about. So much so, that despite the reach of technology, apps and platforms built for sharing views, quality debates and discussions are hard to find now.
On this note, some would find Twitter the best platform around, as it provides only 140 characters for a post – so one expects far less invective, enforced by the mandated brevity. Sadly, even this has not curbed the online abuse and hate speech on Twitter.
Following much scrutiny over the increasing hate-speech content on social media, companies have taken steps to curb its growth.
In November last year (with the U.S. Presidential elections around the corner), Twitter gave its users the ability to hide content they didn't wish to see, and to report abusive or hate-speech-related content.
A substantial increase in reports of racist and derogatory content against minorities was seen by the end of last year; Twitter, however, has been taking important steps to prevent the spread of such content.
Del Harvey, Twitter's Vice President of Trust & Safety, said, “There's a fine line between free expression and abuse, and this launch is another step on the path toward getting rid of abuse”.
Earlier this month, Twitter made another announcement about clamping down on hate speech and abuse against its users. It also made it clear that it would create a feature allowing users to search more safely, by blocking tweets with potentially sensitive words, terms or phrases.
Such tweets would still appear, but would not be easy to find. Steps would also be taken to narrow down users' searches by toughening the language around what is considered unacceptable and hateful.
Twitter, in its statement, also said that it has started identifying people who have previously been banned for abusive content, and will stop them from creating new accounts.
Other steps include collapsing irrelevant and low quality tweets, pertaining to a particular matter. Twitter, in its statement said, “Our team has also been working on identifying and collapsing potentially abusive and low-quality replies so the most relevant conversations are brought forward”.
However, all these steps only help curb abusive content; they still do not prevent people from posting it. And since anonymity is not in the picture for Twitter, potentially abusive content can still reach a large audience, depending on the poster's number of followers.
But, Twitter has a history of banning accounts for propagating hatred, racism, xenophobia and other derogatory content via its platform.
Exhibit A, a celebrity: Milo Yiannopoulos, an editor at Breitbart News, a right-wing news network, was banned from Twitter last July for “participating in or inciting targeted abuse of individuals”.
Twitter's tools to curb socially unacceptable content are commendable. They have paved the way for other social media platforms to take similar steps towards making social networks a less evil place.
But for as long as people don't use their own in-built filters – their brains and intellect – we don't see these platforms becoming any less polluted.
Don't Hate Google For This - It's Going To Count Calories From Your Photos Soon
In between uploading photos of your steaks and kababs to Instagram and waiting for the Likes to come streaming in, you could also be smacked in the face with a sobering calorie count, courtesy Google!
The company unveiled plans for a new app called Im2Calories at a tech conference in Boston last week.
Well, this could actually be more revolutionary than its autonomous car technology (Kidding! We love you, Google!).
One of the app’s goals is to make calorie tracking easier. Instead of jotting food down in a journal, or typing and using a separate app, Im2Calories piggybacks off something you might already do – snapping and sharing pics of your plate.
Im2Calories will rely on image-processing technology that can identify and recognise the food in your photos, and by analysing the pixels, the app will estimate how many calories you’re about to spear on your fork. It bases the information on publicly available nutrition labels.
The app isn’t designed to be perfect, but it will get better over time as more people start using it, said Google research scientist Kevin Murphy, according to a Popular Science report.
That’s because Im2Calories is an artificial intelligence and machine learning tool at heart. With more data, the app will learn to distinguish blueberry pancakes from chocolate chip pancakes, and if it’s wrong, Google will give you a way to change the name of what’s tagged.
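The pipeline described above – recognise the food in the photo, then price it against public nutrition labels – can be sketched in miniature. Everything below is hypothetical: the function names, the portion estimate, and the per-100 g calorie figures are illustrative stand-ins, not Google's actual model or data:

```python
# Illustrative calories per 100 g, as might come from public nutrition labels.
NUTRITION_TABLE = {
    "blueberry pancakes":      222,
    "chocolate chip pancakes": 270,
    "steak":                   271,
}

def classify_food(image_pixels):
    """Stand-in for the image-recognition step: a real system would run a
    trained model over the pixels; here we simply hard-code a label and a
    rough portion-size estimate in grams."""
    return "blueberry pancakes", 150.0

def estimate_calories(image_pixels):
    """Combine the recognised label and portion size with the lookup table."""
    label, grams = classify_food(image_pixels)
    kcal = NUTRITION_TABLE[label] * grams / 100.0
    return label, round(kcal)

label, kcal = estimate_calories(image_pixels=None)
print(f"{label}: ~{kcal} kcal")  # blueberry pancakes: ~333 kcal
```

The hard part, of course, is the step this sketch fakes: telling blueberry pancakes from chocolate chip pancakes, and estimating portion size from pixels – which is exactly where the machine-learning feedback loop Murphy describes comes in.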
This isn’t going to lead to a practical product in the short term. Google only just filed for a patent on Im2Calories’ underlying technology and has no immediate release plans, so you can post dessert photos to Instagram with relatively little guilt.
A spokesperson told CNET the technology behind Im2Calories is still in research and development. “No actual product plans at this stage“, he added.
Eventually, though, it could be a staple feature of health apps that help you balance your food habits with your activity levels. And the potential doesn’t stop there, either. While food is the “killer app”, the image recognition code could also apply to traffic prediction and anything else where a series of photos can provide a wealth of data.
“We semi-automate“, Murphy said.
Like Facebook and other big tech companies, Google has been focused on image-processing technology lately. Its new Photos app, for example, automatically groups and strings your pictures together into albums without you having to do a thing. RealNetworks recently launched a comeback with a similar app called RealTimes, which automatically creates video slideshows.
Google has already filed a patent for the capability, so expect things to start popping up with more and more AI built in, to demystify, tag, club and process your photos with even more intuition.
The VR platform everyone had been waiting for is finally open to all developers. As of this week, any developer can make an app for Google Daydream.
Google had arguably been a little late to the VR party, but when it did arrive, it did not bring a device – it brought an entire ecosystem instead, something devices can be powered from.
Google Daydream has been the center of much curiosity ever since. Here's a quick read that should explain what Google Daydream is about.
The platform was announced back in May 2016 and has been live for a few months now, but was only open to apps by a select few developers. Firstly, the company was still in the process of testing the platform. Secondly, to ensure that users appreciate the experience, Google had been quite particular about maintaining a quality standard for apps allowed onto the platform.
Limiting the number of developers they worked with allowed Google to work in close collaboration with partners and thus carefully curate, and manage, content for their new platform. But it of course also had a downside. It severely limited the number of apps one could download for the new headset ecosystem through the Google Play store. So while the experience was supposedly good, the variety was quite limited. Now that the platform is open to all developers, that is bound to change.
It would be interesting to see what this new move brings to the platform.
There are obviously many skilled developers who have been waiting for the platform to open so that they can present their apps.
Equally, of course, there are also many less-skilled developers who might end up pushing incomplete and low-quality apps onto the platform. Filtering, in this new environment, will be an interesting task. Apps can be submitted through the Play Store, much like any other Android app.
Google, however, is still being very particular about the quality standard on its platform. The company has published a set of requirements for apps that can be published onto the VR platform. All developers must follow these requirements while submitting the apps, and the company expects to hold the standards high. These requirements include certain unique assets, such as 360-degree photosphere, a VR icon, and Motion Intensity Ratings.
This move has the capacity to dramatically shift the momentum in mobile VR, especially given that the company is competing with the likes of Samsung’s Gear VR (Samsung recently announced it has sold about five million headsets) and Facebook’s Oculus.
Both of these have already made their way into global markets, and everyone has been wondering where Google’s Daydream is headed.
The apps on Daydream have so far shown comparatively small download numbers. With more apps expected to become available now, this should change, as users will have more options to choose from.
Google also announced Daydream View, a Daydream-compatible VR headset that was designed by Google, in October last year. The VR headset, Daydream View, for now has a limited availability, within the US. It is available at Verizon, Best Buy and the Google Store in the US.
As the ecosystem grows, more and more headsets and phones can be expected to be compatible with it. Even though that sounds like a given, the catch is that curation in VR is quite tricky, especially since design problems and slight glitches don’t just mean a broken app on your phone; they can leave users feeling ill, with severe nausea, headaches, and the like. It is therefore important to get it just right, even if it means stalling the process a wee bit longer.
The move to open the platform to all developers is not really an unexpected or surprising one; it was a logical step that everyone had been waiting for. Google had previously indicated that it would be opening the platform to all developers in 2017. The move, nonetheless, is quite welcome.
Amongst the things back from the dead this week is the secure email service Lavabit, infamous for having been used by the white-hatted whistle-blower Edward Snowden.
Even though a number of email and messenger platforms claim encryption today, Lavabit was one of the first, and most unimpeachable, services to be proven secure. All this, and Snowden’s own citation (of sorts), has helped Lavabit build very significant credibility.
Lavabit founder Ladar Levison, on January 20th, announced that the site is back online, with more layers of security. However, for now, it is only available to those who used it back in the day. No new users are being registered for the time being.
The service addresses what has become the point of major tussle between tech companies and governments across the world – the latter’s constant demand for “backdoor access” to user data from various platforms of communication such as email or messenger systems.
Back in 2013, when Snowden had leaked sensitive details of the NSA’s mass surveillance program PRISM, he had used Lavabit’s services. Later that year, the service shut itself down.
Lavabit had 410,000 user accounts at the time. It eventually came to light that the government had asked the owner to turn in the site’s encryption keys, and the owner chose to shut the site down instead of complying.
Four years later, the site is back up again.
“In August 2013, I was forced to make a difficult decision: violate the rights of the American people and my global customers or shut down. I chose Freedom. Much has changed since my decision, but unfortunately much has not in our post-Snowden world. Email continues to be the heart of our cyber-identities, but as evidenced by recent jaw-dropping headlines [in the U.S. Presidential elections, and the Clinton email leaks] it remains insecure, unreliable, and easily readable by an attacker,” stated Levison in the note announcing the relaunch of the service.
The SSL key which would allow for vulnerable information to be decrypted, Levison said, was the biggest threat they faced back then. At the relaunch now, the service comes back with a new architecture that fixes the SSL problem and includes other privacy-enhancing features as well, such as one that obscures the metadata on emails to prevent government agencies like the NSA and FBI from being able to find out with whom Lavabit users communicate, and what they say.
Amongst these new security features, Levison also announced the public release of his 2014 Kickstarter project. Under this project, the Dark Internet Mail Environment (DIME) and an email server named Magma give users an open-sourced communication platform with end-to-end encryption.
DIME even provides the users three levels of privacy: Trustful, Cautious, and Paranoid!
With the first, users place more of their trust in the server. The second provides users with an experience comparable to that of ordinary email while minimizing the trust placed in the server. As the name suggests, the Paranoid mode goes overboard: it minimizes the amount of trust a user is required to place in their server, at the expense of functionality. This mode does not support webmail access, nor does it allow users to access their account from multiple devices without an external method for synchronizing their key ring.
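The trade-off between the three modes can be sketched as a simple table of properties. The field names below are illustrative assumptions made for this sketch, not part of the actual DIME specification:

```python
from dataclasses import dataclass

# Illustrative model of DIME's three privacy modes as described above.
# Field names are this sketch's assumptions, not DIME's actual API.
@dataclass(frozen=True)
class PrivacyMode:
    name: str
    server_holds_keys: bool   # how much trust is placed in the server
    webmail_supported: bool
    multi_device_sync: bool   # without an external key-ring sync method

TRUSTFUL = PrivacyMode("Trustful", server_holds_keys=True,
                       webmail_supported=True, multi_device_sync=True)
CAUTIOUS = PrivacyMode("Cautious", server_holds_keys=False,
                       webmail_supported=True, multi_device_sync=True)
PARANOID = PrivacyMode("Paranoid", server_holds_keys=False,
                       webmail_supported=False, multi_device_sync=False)
```

Each step down the list removes a convenience in exchange for placing less trust in the server.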
Levison went on to assure users that regardless of the complex nature of the encryption system involved, the service is “flexible enough to allow users to continue using their email without a Ph.D. in cryptology”.
The timing of the relaunch of the service, of course, is curious, especially owing to the finger Levison seems to be pointing in his announcement letter. “Today is Inauguration Day in the United States, the day we enact one of our most sacred democratic traditions, the peaceful transition of power. Regardless of one’s political disposition, today we acknowledge our shared values of Freedom, Justice, and Liberty as secured by our Constitution. This is the reason why I’ve chosen today to relaunch Lavabit”, Levison stated.
The U.S. system allows for the President to use a massive and powerful surveillance apparatus in the form of the various U.S. intelligence services, which can operate “under the radar”, and with virtually no oversight.
This, of course, is not a concern restricted to the U.S. The U.K. also recently granted its police and intelligence agencies unprecedented power, enabling them to monitor citizens in alarming ways. The tales of Russian surveillance, especially in the Soviet era, need no introduction, nor does the Chinese political apparatus’ control over every kind of media.
A system like Lavabit is designed to bypass exactly these kinds of surveillance and security oversight, ensuring that users’ data is only available to the eyes of those intended. It does come back in an era when users have more options, like WhatsApp and Facebook Messenger, both of which have end-to-end encryption, but the credibility of a service like Lavabit, which has already paid a huge price for the privacy of its users, is not comparable to anything else in the market right now.
Former users of the service can now access their accounts and migrate them to these new security protocols. The site will also be open to new users in the near future, and one can preregister at a discount right now. Normally the service costs USD 30 for 5 GB of storage or USD 60 for 20 GB.
In the face of growing criticism of the increasing fake news on Facebook newsfeeds, the social media giant is about to test its new fake-news filtering tools in Germany.
This comes just in time for the German Federal elections that are scheduled to take place over the next few months.
With this new system, to be implemented in the next couple of weeks, Facebook’s users in Germany will be able to report a story as fake. Once a story is so reported, it will be sent to Correctiv, an independent Berlin-based fact-checking organization, which will examine it.
If the story is found unreliable, it will be flagged as “disputed” on the social media platform, meaning that while people will still be able to share it, the story will come with a warning.
Another step that might help counter the fake news epidemic is that once a story is flagged as “disputed”, it will not be prioritized by Facebook’s news feed algorithm.
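The workflow described above (report, fact-check, flag, deprioritise) can be sketched roughly as follows. The function, status names, and the ranking penalty here are made up for illustration; this is not Facebook’s actual code or API:

```python
# Rough sketch of the reporting workflow described above.
# Status names and the ranking penalty are illustrative assumptions.
def moderate_story(story, fact_checker):
    if story.get("reported"):
        verdict = fact_checker(story)       # e.g. sent to Correctiv for review
        if verdict == "unreliable":
            story["flag"] = "disputed"      # still shareable, but with a warning
            story["rank_weight"] *= 0.1     # deprioritised by the feed algorithm
    return story

story = {"title": "Example headline", "reported": True, "rank_weight": 1.0}
checked = moderate_story(story, lambda s: "unreliable")
print(checked["flag"], checked["rank_weight"])  # disputed 0.1
```

The key design point the article describes is that a disputed story is demoted rather than deleted: it stays shareable, but loses the algorithmic amplification that lends it credibility.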
The Newsfeed algorithm is what works to allow any story to trend on the social media platform, garnering it more eyeballs, and thus more credibility.
This is imperative, because let’s face it: in this socially-enabled world, if a lie is repeated long enough, the echo it creates on the worldwide chatter space of social media is sufficient to confuse most people between the concoction and the truth. Quite easily, in fact.
This move also comes in the face of a recent proposition by the German government for a law that would levy a €500,000 ($523,320) fine for each piece of misinformation published and not removed by the network within 24 hours.
This proposition came after the investigation of last month’s Berlin truck attack was widely talked about online, with mostly fake news being propagated, nearly leading to a situation of chaos within Berlin.
It’s safe to say, Germany had had enough of Facebook, and the impact of fake news.
Thankfully, it’s not only Germany that will gain the power of the truth.
Facebook has also started a similar approach in the United States of America, after fake news on Facebook emerged as the new villain and was heavily criticized during the last U.S. elections for helping Trump ascend to the Presidency.
Fake news on social media platforms was believed to have swayed the vote quite heavily and to have created a scene of chaos within the country after the election, when the website saw a great deal of anger and frustration directed at it.
Facebook then partnered with bona fide news organisations in the U.S. like ABC News and the Associated Press, plus fact-checking groups Politifact, Snopes and FactCheck, to verify controversial stories.
In Germany, the cause for concern is the upcoming election as much as the social order of the country. In the face of the Syrian refugee crisis and the region’s everyday toughening politics, Germany has been a hub for the influx of refugees fleeing terror. The fake news surrounding last month’s Berlin truck attack not only pointed fingers at refugees, it also pointed fingers at the policies of Angela Merkel, who had strongly supported the influx of refugees into Germany last year.
In an interview with the Guardian, Steffen Seibert, a government spokesman, said the authorities were dealing with a phenomenon of a dimension “not seen before” in Germany.
The fear, then, is that fake rumors concerning migrants and refugees might spur the rise of populist parties, stirring hate against foreigners, in addition to perhaps producing an election driven by bias and misinformation.
Facebook’s COO, Sheryl Sandberg visited Berlin this past week to meet with government officials in the same regard. In an interview with Germany’s top-selling daily paper Bild, she emphasized that Facebook can’t possibly single-handedly deal with the epidemic of fake news, stating how important the reliance on third parties for the checking of the information is for them.
“We don’t want to decide what the truth is, and I don’t believe anyone wants us to do that“, said Sandberg. “When we say that we can’t take it on ourselves, that doesn’t mean that we don’t want to take any responsibility. We do take responsibility“.
Facebook seems to have taken the first step to try to curb the epidemic. The success of the step can only be measured over time, so we shall keep an eye out for this one.
In the meantime, don’t believe every headline you read on your Facebook newsfeed, open those articles, read through them, weigh their authenticity based on the source they come from, and only then share the information.
Curbing the epidemic is still a long way off, and we all need to take our own steps to fight the spread of misinformation. As we’d written earlier, we are as much to blame in propagating it, albeit innocently.
As an old mentor of mine used to say, “In order to not land up in a spot where we’re going to be surprised, check your premise before you believe something”. I learnt from this, and you should too: don’t believe everything you read.
Take the time to check your premise before you make up your mind!
Snapdragon 835, The Processor for 2017 Flagships
Towards the end of last year, Qualcomm announced that its next crown jewel was going to be the Snapdragon 835. It also said that its new, next-generation smartphone processor would be built on a 10 nm FinFET process node in collaboration with Samsung. But that’s where the details halted, and nothing else was made known to the world.
That announcement had its effect: curiosity and conjecture ensued, mulling the whats and hows of the Snapdragon 835 processor.
Then, at the recently concluded CES 2017 event, amid the unveiling of a lot of new cool gadgets, Qualcomm announced more details about the Snapdragon 835.
The company dwelled on some details regarding the forthcoming chip, shedding light on the clock speeds, core designs and upgrades on top of Snapdragon 820.
The official word is that the Snapdragon 835 will feature the Kryo 280 CPU with four performance cores running at up to 2.45 GHz and four efficiency cores running at up to 1.9 GHz. The new chip will also feature LPDDR4X memory (a type of LPDDR4 developed by Samsung that uses 0.6 V for I/O voltage (Vddq) instead of the standard 1.1 V).
“The combination of the CPU, GPU, DSP and software framework support in the Snapdragon 835 offers a highly-capable heterogeneous compute platform,” the company said in a release.
In terms of connectivity, the Snapdragon 835 processor comes with “an integrated X16 Gigabit-Class LTE modem, with integrated 2×2 802.11ac Wave-2 and 802.11ad Multi-gigabit Wi-Fi, making it the first commercial processor equipped to deliver Gigabit-Class connectivity at home and on the go,” Qualcomm said.
To provide some concrete figures, the processor is claimed to offer a 20% performance gain and 25% faster graphics rendering over its predecessor.
The hardware-based user authentication on this new chip makes the smartphone suitable for uses like enterprise access, protecting users’ personal data, and mobile payments, which are the need of the hour, all thanks to demonetisation (in India).
The Snapdragon 835 is clearly aimed at supporting next-generation entertainment experiences and connected cloud services for premium consumer and enterprise devices, including smartphones, VR/AR head-mounted displays, IP cameras, tablets, mobile PCs and other devices running a variety of operating systems, including Android and Windows 10, with support for legacy Win32 apps.
“Our new flagship Snapdragon processor is designed to meet the demanding requirements of mobile virtual reality and ubiquitous connectivity while supporting a variety of thin and light mobile designs,” said Cristiano Amon, Executive Vice President, Qualcomm Technologies, Inc., in a statement.
The Snapdragon 835 also incorporates the new Adreno 540 GPU and Qualcomm Spectra 180 image sensor processor (ISP), taking your travel photography game to the next level with its amazing camera capabilities. This latest Qualcomm flagship processor can support up to 32-megapixel single and 16-megapixel dual-camera setups.
What’s more, the Snapdragon 835 is packed with Quick Charge 4, which delivers up to 20% faster charging and up to 30% higher efficiency than Quick Charge 3.0.
This clearly implies that the users can play more games and watch movies for a longer period of time on their smartphone without worrying about the battery. Additionally, there is word that the mobile platform has been shrunk and is 35% smaller in package size, thereby consuming 25% less power in comparison to its predecessor, which means longer battery life and thinner designs.
The improved performance on the newest chip is owed to the gains in the clock speed and not really any crucial micro-architectural changes. Most of us are aware of the fact that smartphones rarely run at their top frequencies for any length of time due to aggressive power management. If the new 10 nm chip is able to hold higher clock speeds than its 14nm predecessor, then it can definitely lead to better results in terms of performance.
Such gains don’t always feel prominent, simply because thermal and power envelopes limit their applicability to specific applications or workloads, and because efficiency improvements have naturally diminishing returns.
In addition to providing details about its latest processor, Qualcomm, in partnership with ODG, also launched the first devices powered by the Snapdragon 835: the ODG R-8 and R-9 AR/VR smartglasses.
The processor is expected to ship in commercial devices in the first half of 2017; in fact, some rumours suggest that the Samsung Galaxy S8 might pack this latest Qualcomm Snapdragon 835 processor.
I know that was a lot of technalese back there, but it’s hard to talk plain English when one’s listing specs! Check back at chip-monks.com to know more as 2017 unfolds and devices bearing this new benchmark of processing power release to the world at large.
Technology is always initially heralded and celebrated for its existence, but soon thereafter, it starts being judged on how close it is to reality. No matter how hard technology seeks to create new paradigms, human expectations eventually push it to mimic something we can relate to.
A screen (like that of a television), as much of a revelation as it was, first had to show visuals, then begin the long journey of mimicking the real-world clarity our eyes were naturally blessed with.
The abacus could calculate, but the calculator became a progressive step only when it calculated at a speed comparable to our brain.
The same benchmarks, in turn, apply to touchscreens too. They’re in dire need of an upgrade.
First came the simple capacity to sense the touch and motion of a finger on a screen using a sensor. Later, it was the sensor’s turn to gain more sensitivity to touch. Yet an extremely sensitive touch becomes irritating.
So, touch itself was reduced to a function of the glass it was built upon. But therein lies the rub.
Touchscreens are uniformly flat. There is almost no difference in touch between the picture of a brick wall and that of a soft cloth. And while processors in a tablet allow you to create songs and music, you feel no joy in playing the instrument. One can digitally strum guitar strings on a mobile app, but cannot feel the grainy texture of the strings’ tautness.
No matter what you do on a gadget, the experience is bereft of well, experience.
Consequently, a lot of people are starting to feel that progress in touchscreen technology is absent. Pixel density seems to be the only barometer of screen development for manufacturers, and that no longer feels like enough.
Enter TanvasTouch technology.
The brainchild of researchers at the Neuroscience and Robotics lab at Northwestern University, Tanvas is the result of a decade’s worth of intensive research and perspiration.
This technology allows the screen to mimic the texture of the surface it is portraying!
To achieve the effect of tactility, it employs what Tanvas calls “real-time control of the electrical forces between your fingertip and the touch surfaces”.
That’s a lot of jargon, but put simply, TanvasTouch – a layer between a device’s touchscreen and your fingers – acts like an electromagnet for skin, physically ‘pulling’ at the tips of your fingers as they move across the screen by using surface-level haptics.
The result is a palpable, dynamic “sense” of touch that pressure+vibration based feedback like Apple’s 3D Touch does not come close to replicating.
“Touchscreens are more integrated into our lives than ever, and yet we are still tapping away at lifeless glass”, Tanvas CEO Greg Topel said in a press release. “TanvasTouch adds a new dimension of interaction”.
The technology was demonstrated as a prototype at the Consumer Electronics Show in Las Vegas.
One compatible app, featuring a draggable coat zipper, produced a sensation akin to tingling as the animated zipper moved up and down its digital teeth.
Another app served up a gallery of different textures (“grainy,” “choppy,” “fine,” and “wavy”).
Yet another app, a virtual guitar, produced a tangible twang each time a finger strummed across the strings!
There’s no other way to say this: the effect was uncannily like the real-world experience of the act. And it felt so much better to feel the stuff than to just tap at it!
Adaptable to virtually any screen out there in the market, the company sees its future in retail.
It has recruited apparel company Bonobos to develop apps that let customers feel pants and shirts before they buy them – a mock-up app on display showed two fabric textures, one cotton and one corduroy.
And it’s brought on NTN Buzztime, the manufacturer behind many of the tablets in restaurants and airports, on board to engineer new experiences that take advantage of the tech.
Given its advancement, the technology obviously has potential for the visually impaired. The company has retained the services of Dr. Patrick Degenaar, a Reader in Neuroprosthetics at Newcastle University, who will be studying the technology’s applicability to the creation of instant braille content.
The prototype showcased in Vegas didn’t yet have the precision to be termed satisfactory; however, as discussed above, one day’s technology is merely a step in a year full of progress.
And once this tech becomes commonplace, we’ll look back and once again wonder how and why we suffered impersonal, ultra-bland flat touchscreens for so long!
Imagine being able to accurately see how furniture will fit in your home, how exactly clothes fit on your body without even wearing them, and imagine being able to play immersive games with your current environment as the map.
That is the promise of Augmented Reality (AR), and the newly-unveiled Asus Zenfone AR is one of the first phones that’s got the complicated chops to support all those functionalities.
The Zenfone AR has two accolades to its name. The first: it is the world’s first phone with 8 GB of RAM (although LeEco was rumoured to be releasing a phone with that gargantuan amount of RAM, Asus seems ready to beat them to the punch).
The other first that the Zenfone AR boasts of is being the first Google Tango-enabled phone in the world, thus beating Lenovo’s PHAB2 Pro to that punch. The PHAB2 Pro was delayed till Fall 2016, and since there is still no sight of it, it’s clear there have been even more delays on that front.
The Asus Zenfone AR is a sleek phone that does not look like the tank-like Tango devices one expected. It features a 5.7 inch Quad HD Super AMOLED display with an impressive 79% screen-to-body ratio, which is a great fit for Daydream.
Indeed, the phone is Daydream-ready as well!
Under the hood, it runs on Qualcomm’s Snapdragon 821 processor, tweaked to deliver the specific performance required for Tango. On board is 8 GB of RAM, along with the complicated set-up of cameras on the back that is necessary for delivering the Google Tango experience.
Google’s Tango project is an effort to make indoor 3D mapping possible, and it requires a set-up with three cameras: one for motion tracking, one for depth sensing, and one for room mapping.
In the Zenfone AR, the camera system features the latest Sony IMX 318 sensor with a 23 megapixel resolution, in addition to the motion-tracking and depth-sensing cameras. This allows the phone to not only track the motion of objects, but also learn an area and have an accurate perception of depth.
Currently there are over 35 AR apps available, which is clearly not enough, but it’s a start, and the focus in 2017 will be on getting more interesting AR content. Asus has already demonstrated a new way for people to shop for clothing with the official GAP AR application, which makes it much easier to see how exactly garments fit on a model.
Another device from the Asus stable, the Asus Zenfone 3 Zoom is also slated to hit the markets soon.
The Zenfone 3 Zoom still has a main camera with optical zoom like its predecessor the Zenfone Zoom from last year. However, this time around Asus went with the dual-camera solution, thus the magnification is just 2.3x compared to 3x in the original Zenfone Zoom.
Like in the Apple iPhone 7 Plus, the new device’s lenses work together to create the bokeh effect that blurs the background of your pictures while keeping the subject in focus. The zoom itself is fixed on one 12 megapixel lens, while the other lens is a 12 megapixel wide-angle 1/2.5″ unit with an f/1.7 aperture and Sony’s IMX362 sensor. Both have 1.4 µm pixels. Optical and Electronic Image Stabilisation systems are on board too, along with 4K video capture.
Autofocus time is claimed to be 0.03 seconds, even with a moving subject, thanks to three different focusing modes: Dual Pixel phase detection, laser, and continuous.
The phablet is 7.83 mm thin and weighs 170 grams. It has a 5.5 inch 1080p touchscreen with Gorilla Glass 5 on top, the Snapdragon 625 SoC at the helm, and a huge 5,000 mAh battery. It will become available in February, disappointingly running Android 6.0 Marshmallow.
The Asus Zenfone AR release date is set for the second quarter of 2017; however, Asus has not unveiled an official price for it. And given its capabilities, I’m not sure price will be a major concern for those wanting AR in their pocket.
With this, and other such superbly crafted and well-endowed devices, Asus is clearly speeding ahead of all its “budget-focused” competitors, and is clearly vying to be amongst the top three premium brands in the world. And it’s getting there.
Meet The New Benchmark Of External Storage: SanDisk’s 256 GB microSD Card
With CES 2017 over, our heads are buzzing with all that new stuff that’s coming, and some amazing announcements. SanDisk is among the latter, with the unveiling of its new 256 GB SanDisk Ultra microSD card.
We aren’t excited about this only for its huge storage capacity, but also because this is the first microSD card in the world that enables the storing and launching of apps directly from the card. Isn’t that cool?
Apart from performing the usual job of expanding storage for phones or cameras (including action cameras), this new card can provide transfer speeds of up to about 95 MB per second, which is super-speedy for external storage cards. Like we said, it is this speediness that enables the host device to run apps directly from the 256 GB microSD card, without dawdling.
Well, in case you didn’t know, like other products that must meet specifications set by their sector’s governing bodies, SD cards too must live up to some regs. The new card, in fact, lives up to the Application Performance Class 1 (A1) requirements of the SD Association’s latest “SD 5.1” specification. Let us explain.
The new A1 category decrees that a card’s eligibility for this uber-class tag (A1) depends on its ability to handle 1,500 random read input/output operations per second (IOPS), along with the ability to write at a third of that rate.
This enhanced speed is precisely what enables SanDisk to launch apps in a jiffy, even using programs filled with high-resolution graphics and audio files.
“The A1 specification will help consumers identify the appropriate card to ensure an optimal experience when running and launching apps on their smartphone. We are pleased that SanDisk will release an A1 card, and continues to contribute to breakthrough technologies enabled by the microSD format”, said Brian Kumagai, president of the SD Association, which decides the requirements for the new A1 standard.
To put the speed factor in perspective, SanDisk’s new card has the potential to transfer 1,200 photos (of about 3.5 MB each) in a single minute (as per SanDisk)! It can also store up to 24 hours of HD video.
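Those quoted figures hang together: a quick back-of-the-envelope check, using only the photo size, photo count, capacity and duration quoted above, shows the photo claim fits comfortably within the 95 MB/s ceiling and the video claim implies a plausible HD bitrate:

```python
# Sanity-check SanDisk's quoted figures against the 95 MB/s ceiling.
photos_per_minute = 1200
photo_size_mb = 3.5
required_mb_per_s = photos_per_minute * photo_size_mb / 60  # 70.0 MB/s
assert required_mb_per_s <= 95  # fits within the quoted transfer speed

# 24 hours of video in 256 GB implies roughly a 24 Mbit/s stream,
# a plausible bitrate for HD footage.
card_gb = 256
hours = 24
bitrate_mbps = card_gb * 1000 * 8 / (hours * 3600)
print(round(required_mb_per_s), round(bitrate_mbps))  # 70 24
```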
Just like its ancestors, this 256 GB card is built to be waterproof, temperature-proof and shockproof, and can even handle the X-rays of airport security.
There is a lot that this little card can do. But to maximise its acceptance, SanDisk ensured that it works with the SanDisk Memory Zone app already available on the Google Play Store. This obviously makes it easier to manage and back up content on smartphones and tablets.
Some trivia, just for trivia’s sake:
The memory card format has obviously become unbelievably popular and useful over the years, making immense contributions in digital imaging (digital photography), drones, dashboard cameras and surveillance systems, and, most importantly, influencing the evolution of smartphones. So much so that around 75% of smartphone models currently feature microSD slots, as per data from Strategy Analytics.
“The microSD card has been an integral part of the digital revolution by providing more options for high performance, high capacity storage for smartphones,” said Dinesh Bahal, Vice President at Western Digital (who now own SanDisk). “SanDisk cards are at the center of more than two billion consumer devices, and now with this A1 card, we’re proud to play a significant role in continuing to advance the trusted format.”
So, for all the photos and videos that you want to capture on your next international trip, or to beef up that Android phone, look for this shiny new plastic. It goes on sale later in January with a price tag of USD 200.
There are tons of productivity apps available on the Play Store that are aimed at helping users minimise their workload substantially while also helping them plan their lives better.
Some of the most popular productivity apps are Evernote, Outlook and Google Drive.
There’s a new one on the Store, which is a smart time-saving app released by Samsung.
Called Samsung Focus, it is an all-in-one productivity app designed largely for the needs of business users, who spend a lot of time doing their share of labour across different apps. Focus brings together complementary things like email, memos, calendar and contacts under one roof, giving the user a hassle-free, streamlined experience.
We’d written about Samsung Focus as far back as May 2016, when we’d heard it was coming to the Note 6 (this was before Samsung leapfrogged the numbering chronology for its Galaxy Note series and went directly to the Note7). Well, we were right about the call.
Focus sports a lot of features, including a tabbed interface, support for multiple accounts, keyword honouring, and smart things like prioritising your notifications.
To start with, unlike other productivity apps, Focus is neither complicated nor complex-looking. The app flaunts a simple yet appealing, uncongested design. The main screen shows all of your upcoming events as well as some recent emails.
You can add calendar entries, manage invites, create memos about important tasks and more, right from there. The app’s tabs carry related information and stay synced with each other.
There is also a universal search tab, essentially a search engine that efficiently digs out information from related parts of the phone.
As I used the app, I realised that Samsung has spent quite some time understanding the nuances of work life. In fact, the next feature of Focus I’m about to showcase clearly validates it.
Considering the widespread Notification Fatigue on smart devices today (thanks to the hundreds of apps, social platforms and increasingly-mobile-first nature of business), Focus helps reduce the clutter.
Focus actually provides a summarised list of your major notifications in an easy-to-read card-like UX that can be customised to your preference and whims. You can customise the notifications according to what you wish to see, and who from.
Your VIPs (bosses, customers and the spouse) can be flagged as Priority Contacts. Notifications of activity from these VIPs can be set to different alert levels and tones.
You could even choose to be notified only about emails from contacts you’ve flagged as important, fencing yourself off from the unwanted mess of commercial emails that amass every other minute. However, it is worth noting that Focus supports only Exchange ActiveSync (“EAS”) and IMAP/POP3 email accounts.
Another smart feature in the app is the Keyword Setup. This feature essentially lets you choose a few keywords around which the notifications of emails revolve.
For example, if the desired keywords you’ve set are “important”, “meeting”, “trip”, you’ll receive specific notifications of emails carrying those words.
Essentially, it is just another way to prioritise your alerts, this time with Keywords.
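For the curious, the idea behind keyword-based prioritisation is simple enough to sketch in a few lines of code. This is purely illustrative – Samsung hasn’t published how Focus implements it, and the names below are our own:

```python
# Hypothetical sketch of keyword-based notification filtering,
# in the spirit of Focus's Keyword Setup feature.
KEYWORDS = {"important", "meeting", "trip"}

def should_notify(subject: str) -> bool:
    """Flag an email for a specific notification if its subject
    contains any of the user's chosen keywords (case-insensitive)."""
    words = subject.lower().split()
    return any(keyword in words for keyword in KEYWORDS)

print(should_notify("Team meeting moved to 3pm"))  # True
print(should_notify("Weekly newsletter"))          # False
```

A real implementation would of course also handle stemming, phrases and the email body, but the gist is exactly this: match, then prioritise.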
While the primary/normal view of the app notifies you of your upcoming tasks, appointments and messages, it also does something most other apps don’t – it helps you set up a conference call from an email. Conference calls can then be joined simply by tapping a single button.
Given that it has access to all this information about you, your preferences and importantly, your work, Samsung has been smart to ensure that the Focus app saves all the data it gathers/uses about your life on your device itself, and does not transmit it to any Servers or external repositories.
Samsung also clarified that Samsung Electronics never shares any User Data. “Samsung Focus does not operate any cloud servers. It connects only to the actual mail servers. It stores your account’s data on the device, and Samsung Electronics never access any user data“, the company clarifies in a note in the app’s Google Play Store listing.
Okay, if you’re wondering if the app is supported on all Android phones… well, the name is a dead giveaway! The app is only supported on Samsung phones – that too, only on those that run Android 6.0.1 (Marshmallow) or above, as their operating system. Bummer! Well, another reason for you to upgrade, I guess!
Some say Samsung drew its inspiration for Focus from the BlackBerry Hub. Fortunately enough (for Samsung), it’s not an outright copy – the BB Hub is mostly about messages (emails and texts), whereas Samsung Focus emphasises all things “productivity”.
But I’ll admit, it does look a little familiar (*halo shining*)
We’d told you about the new Bluetooth version when it was unveiled on the 16th of June this year, via our very informative article available here. You should read that too, as you’d learn a lot about the newest standard that has now been finalised and should be hitting all your new devices in the coming months.
Bluetooth 5.0 is finally out in the market, ready for commercial use!
The wireless standard’s Special Interest Group (SIG) has finally adopted the Bluetooth 5.0 spec as of December first week, clearing the way for the updated technology to be used commercially.
This update to the decade-old technology brings higher speeds, increased range, increased message capacity, and better interoperability for exchanges over the wireless network.
At first sight, it may seem like just another one of those “updates” in tertiary technology that would have no significant impact on an average technology user, but Bluetooth 5.0 is a lot more than that.
The new network is said to deliver four times the range, two times the speed, and eight times the broadcast message capacity! In addition, it was made to reduce interference with similar technologies.
What this means is that at any given point, more than one wireless technology is usually operating in a given space – say, the Wi-Fi and Bluetooth in an average house – and they can degrade each other’s performance. Bluetooth 5.0 is designed to reduce the interference it causes with other, similar technologies, making it more productive and efficient.
The goal of this version of the wireless connection seems to be the Internet of Things (IoT) – interconnected technology within a confined space. IoT, by its very nature, tends to require strong connections that can relay information instantly across large spaces, and Bluetooth 5.0 seems to be quite capable of that.
The field of IoT is quite open for now, simply because, while people are still getting familiar with the idea of connecting gadgets and devices, like phones and speakers and laptops, they are still not too comfortable with the idea of buying interconnected locks, or thermostats, or washing machines, or dishwashers, or lights. That leaves a lot of scope to work within this particular field.
This update comes as the industry for connected devices is expected to grow massively in the coming years – estimates run to nearly 48 billion internet-enabled devices installed worldwide by 2021, according to ABI Research. One can expect at least a third of these gadgets to have Bluetooth capability.
To bring things into perspective, let’s work with an example.
The range and speed being touted mean that if you put on headphones powered by Bluetooth 5.0 and play music on your phone, you could roam around your entire house without having to keep the phone on you, or worrying about the connection dropping because of distance. Today, by contrast, most Bluetooth devices lose the connection when you move from one room to another.
Another example would be working with Bluetooth enabled gadgets. So working with connected devices within your home/workspace might just get a whole lot better.
Even with these improvements, all of which might prove quite significant, the technology remains extremely frugal in its power requirements, quite like its predecessors. What this means is that users can expect a streamlined, enhanced experience without having to worry about the battery or the power consumption of the technology.
Now that the technology is out in the market, ready for implementation, we can expect 2017 to bring us a lot of devices that would feature the technology, especially all the flagships of the year, and the VR/AR and the Internet of Things packages.
It will obviously take improvements in devices to accommodate this new technology – one can expect those to work out over the next two to six months – but there is also the idea that a low-key version of the technology might come to already existing devices.
The Low Energy version of Bluetooth 5 might work with any gadget running Bluetooth 4.2, 4.1 and 4.0 that has the Low Energy feature. Those that use Basic Rate or Enhanced Data Rate Core Configuration might also be compatible with Bluetooth 5.
One can expect that in a year, practically all new phones will have Bluetooth 5.0; meanwhile, most people are still trying to understand how this technology will boost connectivity otherwise – say, in terms of IoT, or in making people more used to the idea of home networks.
The updated technology opens new doors and there is room for a lot to happen.
Tale Of A Headphone - Dynamic vs. Planar Magnetic vs. Electrostatic
It’s the best of times to be an audiophile. It’s the worst too. The market has been flooded with a smorgasbord of aural delicacies, so confusing that even Bach might have ended up befuddled – had he ever wanted to listen to his Magnificat.
One needs things to be simple. And in the world of audio, they are anything but!
Basically, there are three types of headphones available in the market, based on the type of technology used to create their sound i.e. the transducer principle used.
Okay, don’t let your head start swimming just yet… We’ll try and keep this interesting. So, the transducer principle is the technique that’s used by headphones to convert the electrical signal from a media source (read: audio player) into sound waves that can be heard by our ears.
Currently, three standards exist in the market:
1. Dynamic or Moving Coil
2. Planar Magnetic or Orthodynamic
3. Stax or Electrostatic
Dynamic drivers are the hoi polloi of the headphone universe. They’re also known as moving coil drivers, and are the headphone equivalent of the full-size drivers you probably have in your hi-fi speakers or portable speaker.
If you don’t know what kind of driver your headphones use, they almost certainly have dynamic drivers. This is by far the most common style, and there’s no chance of that changing in the foreseeable future.
In this kind of driver, the signal is sent through a coil of ultra-thin wire, creating a magnetic field that reacts with a magnet that it’s set into. It’s an electromagnetic relationship, in physics terms. This causes the voice coil to rapidly move backward and forward, in turn moving the speaker diaphragm the coil is attached to.
On a hi-fi speaker this is the cone-shaped part you tell the kids not to touch (and they inevitably do). But in headphones or a small Bluetooth speaker it’ll generally be hidden behind a grille so you can’t see it.
This movement rapidly compresses and decompresses air, causing the sound waves that make up the audio you hear.
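For the physics-inclined, the force driving the coil is the Lorentz force on a current-carrying wire in a magnetic field, F = B·I·L. Here’s a quick back-of-the-envelope sketch; the figures below are purely illustrative, not the specs of any real headphone:

```python
# Back-of-the-envelope Lorentz force on a voice coil: F = B * I * L.
# All values are illustrative assumptions, chosen for round numbers.
B = 1.0    # magnetic flux density in the gap, tesla
I = 0.01   # signal current through the coil, amperes
L = 5.0    # total length of coil wire sitting in the field, metres

force = B * I * L  # newtons
print(f"Force on the coil: {force:.3f} N")
```

Tiny forces, but acting on a very light diaphragm thousands of times a second – which is all it takes to move air.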
Planar magnetic drivers are far less common than dynamic ones, and also more expensive – only a few companies make them.
There are three important names in planar headphones right now: HiFiMAN, Audeze and Oppo. Hardly the biggest names, but worth taking note of if you haven’t already.
These headphones work on a principle similar to dynamic driver headphones – using the interaction of two magnetic fields to cause motion. However, instead of moving the voice coil, pulling the diaphragm in and out from one ring within the driver, here, the charged part is spread across the driver, which is a thin, largely flat film.
So instead of focusing the force on a small part, it’s spread across the diaphragm. This generally requires larger magnets than a dynamic driver array, and they’re needed on both sides of a diaphragm, which is why a lot of planar magnetic headphones are quite big and heavy.
Headphone veterans out there may also know this kind of headphone as Orthodynamic, a term popularized by Yamaha. However, that’s actually a marketing term that only really referred to Yamaha headphones.
Now we’re onto the grandaddy of headphones – electrostatics. Not because they were worn by cavemen back in year X, but because the greatest headphones are made using this technology (okay, I couldn’t think of a really humorous parable).
Electrostatic drivers consist of a thin, electrically charged diaphragm, typically a coated PET film membrane, suspended between two perforated metal plates (electrodes). The electrical sound signal is applied to the electrodes creating an electrical field; depending on the polarity of this field, the diaphragm is drawn towards one of the plates. Air is forced through the perforations; combined with a continuously changing electrical signal driving the membrane, a sound wave is generated.
Electrostatic headphones are usually more expensive than moving-coil ones, and are comparatively uncommon. In addition, a special amplifier is required to amplify the signal to deflect the membrane, which often requires electrical potentials in the range of 100 to 1000 volts!
Due to the extremely thin and light diaphragm membrane, often only a few micrometers thick, and the complete absence of moving metalwork, the frequency response of electrostatic headphones usually extends well above the audible limit of approximately 20 kHz.
The high frequency response means that the low midband distortion level is maintained to the top of the audible frequency band, which is generally not the case with moving coil drivers. Also, the frequency response peakiness regularly seen in the high frequency region with moving coil drivers is absent. The result is significantly better sound quality, if designed properly.
Electrostatic headphones are powered by anything from 100v to over 1kV, and are in proximity to a user’s head. The usual method of making this safe is to limit the possible fault current to a low and safe value with resistors.
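To see why a simple resistor makes those scary-sounding voltages tolerable, Ohm’s law is all you need: the worst-case fault current is I = V / R. A quick illustrative calculation (the resistor value here is an assumption for the sake of the example, not a published spec):

```python
# Illustrative Ohm's-law check of the resistor safety approach:
# a series resistor limits the worst-case fault current to I = V / R.
bias_voltage = 1000.0    # volts (upper end of the range quoted above)
series_resistance = 5e6  # ohms, illustrative value

fault_current_ma = bias_voltage / series_resistance * 1000
print(f"Worst-case fault current: {fault_current_ma:.2f} mA")
```

A fraction of a milliampere is well below commonly cited perception thresholds, which is the whole point of the design.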
I know this sounded like a physics lesson – and honestly, there’s not a much easier way to explain such stuff. But I know for a fact that if you go back and read this article again, it’ll make more sense.
Hope you enjoyed reading this article, and that you’re wiser now about the vast world of technology behind the humble headphones. I know this is just a primer of an article, but any more and we’d have both needed an Aspirin! But, if you need more advice on choosing/buying a good pair of headphones, we’d written a really nice article on the topic a few months ago. You really should read that too.
Augmented Reality (AR), closely related to Virtual Reality in many ways, is one of the ‘hot’ trends in town, and it’s serious business. Facebook bought Oculus for USD 2 billion back in 2014; Google (and others) invested USD 542 million in Magic Leap; manufacturers like Asus, Huawei and Samsung launched VR headsets, and Apple is rumoured to be working on VR too; Samsung is rumoured to be working on smartphones with hologram tech and even contact lenses with built-in cameras; and Google created two platforms (which should tell you how serious Google is about VR/AR) – its Daydream platform and Project Tango (which powered Lenovo’s Phab2 Pro smartphone).
Snapchat, too, seems intent on cashing in on the trend. In November last year, Snapchat introduced World Lenses, a feature that enables users to overlay their self-portraits and videos with animated graphics. This cool feature expands on Selfie Lenses: now, the filters can be applied to other objects in addition to the user’s face.
To illustrate, the rainbow-like filter can be applied to clouds, enabling the user to effectively twist the reality of a video that includes them. Snowflakes can be added to, say, your bedroom, and floating hearts can be applied to both living and non-living things! In fact, Snapchat even commemorated the controversial American Presidential Election – with a filter reminding American users to head to the polls on the day.
World Lenses, in the true sense of the feature set, is not restricted just to the surroundings of the user, it can also be used to animate and decorate your face too (using the selfie camera, of course)!
Snapchat in a statement said, “World Lenses will help Snapchatters decorate the world around them in even more fun and creative ways“.
This feature further ties in with Snapchat’s Spectacles eyewear launched in September 2016. For all those who are unaware of what Spectacles is, it is a $130 worth pair of glasses by Snapchat that can record 10 seconds of video clips which then can be shared via iPhone or Android on Snapchat. Since the camera on Spectacles is outward, some analysts are of the view that the usage of World Lenses can be well suited to the purpose of recording short video clips.
So far, Snapchat seems to be cashing in well on it – as of June 2016, Snapchat had 150 million daily active users, and it is quite popular with youngsters.
Perhaps this is why various advertisers don’t want to miss out on the opportunity of reaching such a massive audience. Knowing this weakness of advertisers, Snapchat is developing an ad-overlay system that will turn your snaps into ads and help the mobile app mint a lot of revenue.
How is this possible?
Well some algorithms here and there and voila, the task is done. By identifying the objects the users are snapping pictures of, the technology can then display ads based on the user’s activity at the moment.
Let’s say you are taking a photo of a drool worthy piece of cake, filters then might appear to advertise bakery brands or similar lifestyle-interest advertisements. Famous restaurant menus and landmarks have also been outlined as a possibility in the patent that mentions this kind of AR advertising.
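Conceptually, the matching step could be as simple as a lookup from image-classifier labels to ad categories. Snapchat hasn’t disclosed its actual implementation, so the sketch below is purely hypothetical – every name in it is ours:

```python
# Hypothetical sketch of AR ad-matching: map labels produced by an
# image classifier to ad categories. All mappings are illustrative.
AD_CATEGORIES = {
    "cake": "bakeries",
    "coffee": "cafes",
    "sneakers": "sportswear",
}

def ads_for_snap(detected_labels: list) -> list:
    """Return the ad categories matching objects detected in a snap."""
    return [AD_CATEGORIES[label] for label in detected_labels
            if label in AD_CATEGORIES]

print(ads_for_snap(["cake", "table"]))  # ['bakeries']
```

The hard part, of course, is the classifier that produces those labels in the first place; the monetisation layer on top is comparatively straightforward.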
Brands are splurging to put out their pricey sponsored geofilters out in the virtual space. Users then can use these branded filters on their snaps before these are put out for display to their friends and story.
The whole concept of AR advertising in itself seems to be an amazing one as it combines a real world image with a digital element, thereby blurring the lines between the two.
These branded geofilters reportedly cost anywhere from USD 250,000 to USD 750,000, depending on the date, geographical coverage, and reach!
An interesting question arises here, what is in AR advertisements that pushes the brands to pay so much?
Well, the basic commonsensical answer would be the massive audience that can be reached and engaged with via the app. An average geofilter can rack up a few million uses and can fish in even more views when snaps are shared.
Snapchat didn’t start selling ads recently; in fact, it began back in 2014, and the cost of advertising with the app has floated upwards ever since, starting at a minimum of $100,000.
With the ads API that Snapchat launched in June 2016, Snapchat can serve ads via third-party technology companies, which plug into the app and deliver ads for their advertising clients. The API automates the process of serving ads, targeting specific groups and measuring them, and it also lends brands more flexibility on how much they spend.
The ad software indicates that Snapchat is moving from an experimental platform to one that can handle advertisers’ needs in a better way, and one of those needs, of course, is the ability to trickle in money without too high a commitment.
So it may be fun and games and gimmicks for you, but for Snapchat and countless others, it’s a serious bounty at the end of the rainbow.
This past month, the Silicon Valley giant announced the commencement of a systematic upgrade of its apps for work, which will henceforth be called G Suite. This move comes with the launch of Google Cloud, a unique and broad portfolio of products, services, and technologies.
These apps are designed to enhance the experience of the Android operating system within the workplace with Google bringing in new features and Artificial Intelligence (AI) into the equation.
First, G Suite. When it was initially launched back in 2007, most people dismissed it simply because using the Microsoft Office suite on your desktop was much easier than using a suite that relied on an internet browser for its functionality. Well, times have indeed changed: with smartphones and interconnected devices, we all want our content accessible and editable wherever and whenever we want.
And G Suite (née Google Apps) has been enabling exactly this proclivity for a while.
But given that they aren’t the only ones – Dropbox, Apple and even Microsoft have equally strong offerings in the Cloud and Online Collaborative Apps space – Google is now introducing newer features to their Docs, backed by Google’s own forays into AI and Big Data.
The first change being introduced is the Explore button, located in the bottom-right of all your Google documents, enabling different functionality depending on the app you’re using.
It uses the data within your spreadsheet, document or presentation to provide insights in real time. Sheets will answer natural-language questions asked right in the tool; it also helps you prep for your work by keeping search topics ready, in line with what you’re working on, and surfacing possible search links and results. When you are working in Slides, the same feature will also suggest pictures and possible design patterns.
If this does not work for you, there’s an inbuilt search bar, which beats any search bar built into other productivity suites.
These new features are being introduced across the platform. A feature called Quick Access in Google Drive on Android, for instance, uses interactions with your colleagues and your calendar to access the files most relevant to you at the given time.
Google Drive will now be better equipped for teams, down to small things like integrating work better and making it easier to add or remove a team member.
Google Hangouts will be better enabled for team meetings: no downloads, no browser plugins; invite anyone and join from any device, even without an account or a data connection, as every meeting generates a short link and a dial-in phone number.
Google is trying hard and brings a lot to the table; however, the fact of the matter is that this market is overpopulated.
Microsoft, of course, rules, not only with its Office but also with the mass fondness it has generated over the time it has been in the market. Apple and its partnership with IBM always keep pushing for more share, even though that has stayed more towards the elitist side.
Rumour has it that Facebook too, is planning to launch its business focused Facebook for Work, which just increases the competition (though we hear that Facebook for Work is more like well, Facebook at work – the same newsfeed and groups and other things Social).
Back to Google. In order to make an even more compelling amalgam of cohesive services, Google has also introduced Google Cloud.
Google Cloud will encompass every layer of its business apps – from the Google Cloud Platform to G Suite, machine learning tools and APIs, enterprise maps APIs, Android phones, tablets, and Chromebooks. The idea is to provide a robust storage solution for the workplace that is not only dynamic enough to suit the ever-changing environment, but also better integrated, making it compatible with its very diverse user pool.
Having used most of Google’s aforementioned services I can tell you that everything works and is fairly good. It justifies Google being hard at work, and is clearly helping Google make its own mark in the work environment, even though one would say that they are a tad late.
Whether this will pass the stringent litmus test of user adoption is yet to be seen.
The times, as they are – are certainly changing! LeEco – the Chinese major – has just become the first company to dump the traditional 3.5 mm earphone jack from its latest line-up of smartphones, moving instead to USB Type-C based audio technology.
There’s a sound reason too.
When it comes to audio technology, the 3.5 mm headphone jack is possibly one of the oldest survivors still in use. While audio players have seen a sea change – from cassettes, to CDs, to MP3, to Hi-Fi players – the audio jack has remained the same, unfazed and curiously unchallenged.
While it could rule the roost in the analog era, most smartphone manufacturers are realising that digital audio deserves better supporting equipment. In fact, if you consider it, with processors and RAM and all-metal bodies becoming commonplace, there’s very little that distinguishes smartphones from each other any more. So the battle is moving to two different zones – cameras and audio.
So LeEco, with its second-generation ‘Superphones’ Le 2 and Le Max 2 that were launched recently in India, has led the revolution in the audio technology space. Using a technology called CDLA (Continual Digital Lossless Audio), LeEco intends to revolutionise the music experience on their smartphones. And, just to be sure that it is a sound investment, the company is going to pump Rs 200 million into the industry with a motive to popularise and pioneer the CDLA standard.
In fact, so committed is the internet technology conglomerate towards this cause of popularising the new technology that it’s going to be giving away a free CDLA earphone worth INR 1,990 to all Le 2 and Le Max 2 buyers during the first flash sale of its Superphones, scheduled for June 28.
So what is this technology all about? According to LeEco, “Delivering uninterrupted sound quality, CDLA improves signal-to-noise ratio from a standard best case of 60dB to 90dB. With our introduction of Type-C USB headphones, we are embedding digital signal processors (DSP) within the earphones themselves to handle the audio decoding. This results in a drastically reduced signal degradation”.
In other words, the digital signal goes straight from the phone and into the headphone’s audio processor, which decodes it, resulting in a purer sound.
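To put LeEco’s 60 dB to 90 dB signal-to-noise claim in perspective: decibels are logarithmic, so a quick conversion shows how large that jump really is. A small sketch of the arithmetic:

```python
# What a 60 dB -> 90 dB SNR improvement means in linear terms:
# decibels are logarithmic, so power ratio = 10 ** (dB difference / 10).
old_snr_db, new_snr_db = 60, 90

power_ratio = 10 ** ((new_snr_db - old_snr_db) / 10)
print(f"The claimed SNR is {power_ratio:.0f}x better in power terms")
```

In other words, the claimed noise floor is a thousand times lower relative to the signal, at least on paper.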
The analog audio jack has indeed fallen far behind other components like the USB Type-C connector, that can not only handle high-throughput data transfers but even be cross-utilised to charge the device itself (and that’s not only phones – the Type-C can even charge laptops!).
Being a digital connection, headphones can leverage the USB Type-C port and even integrate a digital-to-analog converter and amplifier right into their headphones, ensuring consistent quality across devices.
In a 3.5 mm jack-based system, the decoder is built into the smartphone and there is no power source for the headphone or the earphone, which means there is no way the earphones can boost the audio quality to prevent quality loss.
Thus, loss usually occurs in traditional 3.5 mm headphones and earphones, irrespective of whether you are using a phone or a laptop.
In CDLA technology, the headset contains an integrated audio processing chip and a decoder which does not induce any sound quality loss.
Beyond the loss itself, analog audio has many problems: interface noise, compatibility issues, a poor sound field, connector noise, and so on. No matter how good the performance of the drive circuit is, as long as a 3.5 mm jack is used, there will always be inevitable losses. Couple that with the fact that users may pair the phone with any of a huge variety of earphones, and real integration between the phone and the earphone is just impossible to achieve.
Removing the dependence on the quality of the earphone’s analog circuitry, and moving decoding to a more self-contained and controllable element – the DSP built into the CDLA headset itself – thus causes an automatic improvement in sound quality, irrespective of the source device.
Additionally, CDLA also supports hi-fi (high-fidelity) audio, which is used for high-quality audio reproduction and includes high-quality file formats such as FLAC.
Back to LeEco’s transition – all these benefits and features work with a USB Type-C based headphones that come with the Le Max2 and Le2 smartphones; but that also means that if a user decides to use a traditional 3.5 mm headset with a converter, he won’t get the same CDLA-equivalent sound quality.
In real-world field testing, the audio quality in the CDLA headset, when used with Le 2, was distinctly superior to the audio quality on the 3.5 mm headset (used with a converter).
LeEco has already launched their Superphones in China that come with standard Type-C interface, along with CDLA headphones, thus making LeEco the world’s first to launch the CDLA standard in smartphones!
While we at Chip-Monks believe that CDLA and similar music standards are on their way to redefining the audio experience in smartphones – thanks to breakthrough technology, intelligence and an upheaval in the supporting ecosystem – this ‘revolution’ will need our open minds. And sympathetic ears.
Eggs-and-omelettes comes to mind, but since we’re talking about auditory senses: lean back, close your eyes and envisage a concert performance – hear the guitar strings, and the plectrum. If you can’t quite hear them and feel the pulse of the music, then you need better audio technology; get excited about it… it’s on its way to a smartphone near you!
Wireless charging is an exciting arena, but it’s proven more of an extravagance till now.
It’s nice to have, desired by many; but the current state of implementation marks it down as something inessential.
Perhaps one of the biggest reasons for this has been its tendency to charge slower than a traditional wired connection. Which is what South Korean tech giant LG’s Innotek division claims to have overcome.
LG’s new wireless charging pad is claimed to be three times faster than the existing 5W wireless charging module, and charges up to 50% within 30 minutes. Thus, LG’s claims imply that their new wireless charging pad can equal the speeds of even the wired Quick Charging equipment!
The company says that the wireless charging pad incorporates new technology from LG Innotek that prevents the pad from overheating during charging. The wireless charging pad embeds sensors that measure the temperature and allow users to suspend charging when it reaches a certain level by just touching the smartphone.
LG believes that if the 15W wireless charging pad’s design is optimized for other applications, it can be utilized in automobiles as well as in furniture as an embedded facility. Embedding charging capabilities is not new, DuPont has already dabbled in the same earlier (we’d covered their Corian Solid Surfaces product back in 2013!).
According to LG, the new charging pad can be used with most wireless charging enabled smartphones currently available in the market and it also meets the standards of Wireless Power Consortium (WPC), which is an international standardization organization for wireless charging.
The wireless charger triples the usual 5-watt output of a regular wireless charger to 15 watts, which is what allows it to match the speed of a traditional charger while conforming to the Wireless Power Consortium’s standards. You may know this as the Qi system, which is more widely used than its competitor from the Power Matters Alliance.
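A rough sense of what tripling the wattage buys you: charge time scales inversely with power, since time = energy ÷ power. The sketch below ignores conversion losses and charge tapering, and assumes a typical 10 Wh smartphone battery (our figure, not LG’s), so treat it as a best-case illustration:

```python
# Rough charge-time estimate: time = energy / power.
# Ignores conversion losses and charge tapering (best-case figures).
# The 10 Wh battery capacity is an assumed, typical smartphone value.
battery_wh = 10.0
half_charge_wh = battery_wh / 2

minutes_at_5w = half_charge_wh / 5.0 * 60    # conventional 5 W pad
minutes_at_15w = half_charge_wh / 15.0 * 60  # LG's claimed 15 W pad

print(f"50% charge: {minutes_at_5w:.0f} min at 5 W "
      f"vs {minutes_at_15w:.0f} min at 15 W")
```

Real-world numbers will be worse on both sides, but the 3x ratio is what makes LG’s “50% in 30 minutes” claim plausible.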
Which phones will be compatible? Any phone can be made to work with Qi wireless charging if you wrap it in a compatible case, and there are some, such as the LG G4, the Galaxy S6, the Galaxy S7, and many Nokia Lumia phones, which already have support built in.
“This proves that LG Innotek has the world’s best wireless charging technology“, Huh Sung, Vice President of the firm’s Electronic Components Division, said in a statement. “As a wireless charging module is directly related to conveniences and safety of handset users, we will meet customer expectations with advanced performance and perfect product quality“.
Although wireless charging has had a slow start, it’s been picking up gradually. According to market research firm TSR (Techno Systems Research Co. Ltd.), the base unit sales were at USD 553 million in 2015 and is expected to go up to USD 2.2 billion by 2019!
While pricing is yet to be specified, the LG Quick Wireless Charging Pad is set for release in Australian, European, and North American markets later this month.
After the recent Galaxy Note7 fiasco, where its high-powered battery was deemed as one of the culprits for its takedown – the market may seem a bit cautious before considering such high-powered claims, or subjecting themselves to any risks associated with batteries and electricity and smartphones.
But the world will wait and watch – for risk is a necessary component in innovation.
Apple iPhone 7 Battery Life - Surprisingly More And Still Unappreciated
One would expect Apple critics to be pumping their fists for glory after the headphone jack removal, which many naysayers painted as the tech giant’s not-so-clean attempt to arm-twist customers into accepting the product.
Ever since we set up Chip-Monks in 2012, in fact the very reason we set it up in the first place, our motive has been to remain stoically unbiased. To remain true to customer-interest. To be clear in our thoughts (not vacillate), and yet be clearer in our support when due. We aren’t swayed by glamour, by larger-than-life propaganda, and definitely not by loud voices (whose only intent is to make noise, to be noticed).
So, we don’t support any one brand out of financial or personal interest. On the other side of the coin, we don’t berate any brand either!
The fact is, Apple, under the direction of Jony Ive, has striven to make the iPhone increasingly sleeker. This creates something of a disconnect with users who shout rather loudly that they’d be more than happy to put up with a thicker device if it resulted in improved battery life.
In fact, countless surveys over the past few years have made it overwhelmingly clear that improved battery life is the most desired feature among iPhone owners, ranking far above design features like thinness!
Nonetheless, Apple’s near-obsession with device thinness marched on unabated, prompting some to wonder if Apple had completely lost touch with its user base.
With the iPhone 7, however, Apple has delivered the drastic improvements to battery life that users had been seeking for years. And, they’ve done that without making the iPhone gain any girth, at all.
Curiously, the iPhone 7’s battery life almost seems like a complete non-story as all anyone can seemingly hear being spoken about is the missing headphone jack or the curious design of Apple’s AirPods.
According to Apple, processor improvements and bigger physical batteries have resulted in the longest battery life ever in an iPhone. Those upgrading from an iPhone 6s to an iPhone 7 will see two hours of additional battery life on average while iPhone 7 Plus users upgrading from an iPhone 6s Plus will see at least an hour more battery life.
All said and done, in most cases, users will see much greater increases in battery life. 3G and LTE browsing improved by a solid 20%, while Wi-Fi browsing increased by an even more impressive 27%. And rounding things out, video playback increased by a respectable 2 hours.
Interestingly, the iPhone 7 marks the first time that Apple managed to increase LTE browsing time on the iPhone since the release of iPhone 5! On the iPhone 7 Plus, users should be able to enjoy an additional hour of battery life relative to the iPhone 6 Plus even on LTE.
So far, we’re impressed with the iPhone 7’s battery life during real-world usage; it is at least 12 hours of engaged use, two more than the iPhone 6s.
Physically the iPhone 7 battery grows to 1,960 mAh, up from 1,715 mAh in iPhone 6s. The iPhone 6s tended to end the day at about 30% remnant charge, when it was new (after one year, it’s at 10% remnant charge), and the iPhone 7 is currently ending the day at about 44% battery life on most days.
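A quick back-of-the-envelope check on those capacity figures – a sketch using only the numbers quoted above:

```python
# Battery figures quoted in this article.
old_mah = 1715  # iPhone 6s battery capacity (mAh)
new_mah = 1960  # iPhone 7 battery capacity (mAh)

# Raw capacity bump, as a percentage of the old battery.
capacity_gain_pct = (new_mah - old_mah) / old_mah * 100
print(f"Capacity gain: {capacity_gain_pct:.1f}%")  # ~14.3%

# Day-end charge comparison from our usage notes above.
day_end_gain = 44 - 30
print(f"Extra charge left at day's end: {day_end_gain} percentage points")
```

Notably, the day-end improvement we observed (about 14 percentage points) is almost exactly the raw capacity gain, with the processor efficiency gains layered on top of that.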
We still end up charging it every night, so there’s no real change in habit, but it’s less stressful to have plentiful juice when you’re driving home or watching some videos to unwind, while on the Metro.
Given that the iPhone 7 has a 1,960 mAh battery, we’d hazard a guess that Apple wanted the battery case (iPhone 7 Smart Battery Case retailing at $99 on Apple.com) to actually be able to fully charge your iPhone. Apple says the smart battery case combined with your fully charged iPhone can give you 22 hours of internet use over LTE or 26 hours of talk time, which is enough juice to get even the most ardent user through an entire day.
It’s worth noting here that while Apple has managed to improve the iPhone’s battery life, it hasn’t sped up the charging process itself.
Some Android smartphones, like the Samsung Galaxy S7 and OnePlus 3, support quick-charging technology; while OnePlus says its phone can recharge up to 60% after being plugged in for just 30 minutes, HTC has been promoting its Quick Charge functionality, which supposedly works even faster.
Looking at the battery specs on recent Android phones, many wonder why Apple isn't making bigger batteries – well, Chip-Monks surmises that current battery tech isn't allowing commensurate growth of charge retention. Also, there are legal restrictions on the size and wattage of batteries in portable devices, mandated by a growing number of airline and aerospace governance bodies (which most people aren't aware of). But that's another story for another day.
Closing up on this story: Apple’s done a lot to improve the battery life on the iPhone 7 and iPhone 7 Plus, and they’ve done it the smart way – without busting implicit weight and dimensional expectations of their users.
Xiaomi, a company that has already taken the world by storm since its market debut just a few years ago, has made another bold move.
They introduced the ‘Tap To Pay’ feature on their devices, making money transactions of all kinds simpler for those who do not like to carry cash.
They call it Mi Pay, and for now, it is available only in China.
The concept of ‘Tap To Pay’ is not at all new to the market. Apple (with Apple Pay) and Samsung (with Samsung Pay) have had it for a while now and are both doing reportedly well.
Certain third party banking and payment services (say like Android Pay) also have various versions of tap to pay or ‘scan’ to pay, in the market.
Yet Mi Pay does something unique that neither of the two big-wigs has come up with yet.
Before we get into that, however, let us first discuss what Mi Pay is all about.
Okay, so it’s a ‘Tap To Pay’ service that works pretty much like a credit or debit card does; instead of swiping the card, you are tapping your device (that has been previously set up for such transactions).
It works through an app, with which various cards and bank accounts can be integrated, and once set up, it can be used for any and all kinds of payments.
In the event that your device is lost, you can expect a good amount of security, through passwords and fingerprints, of course. You can additionally turn off the payment service through the corresponding website.
Launched on the 1st of September, 2016, the service is available on Xiaomi phones that have an NFC chip. Currently, the Mi 5 is the only Xiaomi phone that has the required chipset integrated within, to support this service.
But we can expect the upcoming Mi Note 2 to feature it as well, along with a possible expansion to other devices.
“We believe that Mi Pay will be a key driving force in promoting the development of China’s mobile payments industry, and deliver much more convenience to our users“, Xiaomi founder and CEO, Lei Jun, said in a statement.
A strong competition from Apple Pay and Samsung Pay is obvious, but amongst home-grown competition, Xiaomi can expect to have to battle Alibaba’s Alipay, and Tencent’s WeChat Pay.
To make it all happen, Xiaomi had to forge ties with over 20 banks, including Bank of Communications, China Construction Bank, China Merchants Bank, Huaxia Bank, Industrial Bank, Minsheng Bank and Ping An Bank. But the ties with the banks were of course not enough.
To widen its reach, Xiaomi, like Apple and Samsung, signed up with UnionPay. China UnionPay is the country’s largest interbank network that facilitates fast and smooth transactions between various points. They reportedly have about 5 million contactless point-of-sale terminals across the country. Working with them would enable Xiaomi to get the widest possible reach, and ensure that all avenues of growth are open to them.
Finally, let’s tell you what’s unique about Mi Pay!
Well, Mi Pay users would also be able to use it to pay at certain public transportation platforms. The obvious question: if 'Tap To Pay' can be easily used at a store, then why not on public transportation? Well, payment methods on public transportation systems are extremely complex and differ in each country, if not in each region. That makes it almost impossible to design something that would work everywhere, especially because of the variety of NFC chips used in different systems. Apple, for instance, plans to embed a new chip in the iPhones it sells in Japan to enable integration with the Japanese system. In Singapore, users need to get a new SIM card with a chip embedded on it to enable them to use such a service on the trains.
To make this a little simpler, Xiaomi has enabled users to add their transportation cards to their Mi Pay accounts, and that is what facilitates these transactions.
So, with Xiaomi's 'Tap To Pay' operational even on public transport, all users have to do when boarding a bus is tap their phone, and voila, the payment is made. This service, for now, is only available in six Chinese cities – Shanghai, Beijing, Shenzhen, Guangdong, Suzhou and Wuhan – but it would be safe to assume that Xiaomi is planning to expand it soon.
Xiaomi runs neck and neck in the Chinese market with several international brands. What would be interesting is to see how this pans out for all of them.
With the increasingly competitive fight for smartphone supremacy and the media frenzy surrounding it scaling new heights everyday, it is not surprising that consumers are gradually starting to take the technology for granted.
While everyone knows the resolution of their screen, or whether their phones are running the latest versions of their respective operating softwares, critical parts of the phone often forgotten are the hardworking sensors. In fact, we think they’re our smart devices’ unsung heroes.
This article's our ode to the elves of the smart devices universe.
Smartphone sensors, much like any other sensors we come across, have the ability to take physical quantities and convert them into readable signals. These little midgets collate a mindboggling amount of disparate information and feed it to the OS and apps, which then use it to provide an amazingly wholesome experience with our devices.
An average smartphone today has upwards of five built in sensors functioning at any given time. We’ll analyse a few of the most important ones:
1. Accelerometer:
Probably the first motion sensor to be integrated into phones, the accelerometer allows the phone to sense the orientation in which it is being held. This information is then transmitted to the screen, which changes orientation to give us a more comfortable view.
The accelerometer works on the principle that if an object is allowed limited free movement in a specific space, and the space is then accelerated, the acceleration can be measured if we can by some means measure the distance by which the object moves.
Consider a small rubber ball suspended by a spring in a long container. Move the container upwards, and the spring is elongated by a certain distance before it settles down. Measuring the distance by which the spring was elongated gives you a measure of how much you accelerated the container by.
The working of the accelerometer MEMS chip.
In your phone, however, the ball and spring are replaced by flexible silicon, which bends on acceleration, with a base attached to the phone acting as the container. When the thin silicon strips move between the capacitor plates placed around them, the change in charge causes a current, the magnitude of which can be measured to give us the direction in which the phone is being accelerated.
The accelerometer sensor is used not only to change between portrait and landscape views but also in a number of games and fitness apps. It is the accelerometer sensor that allows the smartphone to measure the number of steps you’ve taken or how long you’ve been walking.
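To make the idea concrete, here's a toy sketch of how an OS might turn raw accelerometer readings into a screen orientation. The axis conventions and thresholds are our own illustration, not any vendor's actual logic:

```python
# Illustrative orientation logic: compare gravity's pull along the
# screen's x and y axes (readings in g, with the phone roughly upright).
def orientation(ax: float, ay: float) -> str:
    """ax, ay: acceleration along the screen's x/y axes, in g."""
    if abs(ay) >= abs(ax):
        # Gravity is mostly along the long axis of the screen.
        return "portrait" if ay > 0 else "portrait-upside-down"
    # Gravity is mostly along the short axis: the phone is on its side.
    return "landscape-left" if ax > 0 else "landscape-right"

print(orientation(0.0, 1.0))  # phone held upright  -> portrait
print(orientation(1.0, 0.1))  # tilted on its side  -> landscape-left
```

Real implementations add hysteresis and low-pass filtering so the screen doesn't flip back and forth when the phone is held near a 45° angle.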
2. Gyroscope:
A gyroscope allows a smartphone to measure and maintain orientation. Gyroscopic sensors can monitor and control device position, orientation, direction, angular motion and rotation. Used in combination with an accelerometer, smartphones can now measure movement along six axes, allowing unknowing consumers to enjoy applications such as driving games, oblivious to the fact that these sensors are some of the most complex in the world.
The working of a gyroscope
Its working is very similar to that of the accelerometer, and it also makes use of MEMS chips. Unlike a traditional gyroscope, however, the MEMS gyroscope does not use a rotating disc to measure orientation. MEMS gyroscopes use the principle that a vibrating object tends to continue vibrating in the same plane even as its support rotates. In the engineering literature, this type of device is also known as a Coriolis vibratory gyro, because as the plane of oscillation is rotated, the response detected by the transducer results from the Coriolis term in its equation of motion.
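One common way the two sensors are fused in software is a complementary filter: the gyroscope's smooth but slowly drifting angle is continually corrected by the accelerometer's noisy but drift-free tilt estimate. This is a minimal, hypothetical sketch, not any phone's actual fusion code:

```python
import math

def fuse(angle_prev, gyro_rate, ax, az, dt, alpha=0.98):
    """One complementary-filter step.

    angle_prev: previous fused angle (degrees)
    gyro_rate:  gyroscope angular rate (degrees/second)
    ax, az:     accelerometer readings (g) used to derive tilt from gravity
    dt:         time step (seconds)
    """
    accel_angle = math.degrees(math.atan2(ax, az))  # tilt from gravity
    gyro_angle = angle_prev + gyro_rate * dt        # integrated rotation
    # Trust the gyro short-term, the accelerometer long-term.
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# Simulate a stationary phone with a biased gyro (false 0.5 deg/s reading).
angle = 0.0
for _ in range(100):
    angle = fuse(angle, 0.5, 0.0, 1.0, 0.01)
print(round(angle, 2))  # drift stays bounded below ~0.25 degrees
```

Pure gyro integration would have accumulated 0.5° of error over the same second; the accelerometer term keeps the estimate pinned near the truth.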
3. Proximity sensors:
Proximity sensors enable the smartphone screen to power down when you bring the phone close to your ear when taking a call. This not only prevents any unwanted input when the screen touches your ear, but also helps save battery.
The working of the proximity sensor.
In its working, the proximity sensor is much simpler than the previous two sensors we have examined. Usually located near the speaker of the phone, the sensor functions by emitting infrared rays and then checking for their reflection. If the IR rays are reflected within a certain distance (generally about 2-5 cm), the sensor is activated and it responds by turning off the screen.
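The decision logic itself is tiny. A toy model of it, with the 5 cm threshold taken from the range mentioned above:

```python
from typing import Optional

THRESHOLD_CM = 5.0  # upper end of the typical 2-5 cm trigger range

def screen_should_be_on(reflection_distance_cm: Optional[float]) -> bool:
    """None means no IR reflection was detected (nothing nearby)."""
    if reflection_distance_cm is None:
        return True
    return reflection_distance_cm > THRESHOLD_CM

print(screen_should_be_on(2.0))   # ear against the phone -> False
print(screen_should_be_on(None))  # phone held away       -> True
```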
4. Ambient Light Sensor:
Ambient light sensors are primarily battery saving tools. The theory is that as it gets darker around us, the brightness of the phone screen required to make it comfortable to use also decreases. By decreasing the brightness of the screen whenever we move indoors or under shade, the smartphone saves battery.
Ambient light sensors use photodiodes to function. These repurposed LEDs create a current when exposed to light; the brighter the light, the higher the current they produce. The current is then converted into a signal which indicates to the smartphone what brightness the screen should be operating at.
This sensor is normally located near the proximity sensor on the front face of the smartphone.
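In software this boils down to a simple mapping from the measured light level to a brightness setting. The lux breakpoints below are made up for illustration; real devices use vendor-tuned curves:

```python
# Illustrative ambient-light -> screen-brightness mapping.
def brightness_percent(lux: float) -> int:
    if lux < 10:       # dark room
        return 10
    if lux < 500:      # typical indoor lighting
        return 40
    if lux < 10_000:   # shade / overcast outdoors
        return 70
    return 100         # direct sunlight

print(brightness_percent(3))       # 10
print(brightness_percent(20_000))  # 100
```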
5. Camera Sensor:
For years manufacturers have been misleading consumers by quoting a high number of megapixels in their camera, with unknowing consumers almost always taking the bait. One aspect of the camera which is often wrongly ignored is the camera sensor.
The camera sensor is what determines how much light is used to create an image. The sensor consists of millions of light-sensitive spots called photosites which are used to record information about what is seen through the lens. The two main types of image sensors used are the CMOS sensor and the CCD sensor.
A CCD sensor consists of a large number of small cells which act as analogue devices. When light strikes the chip it is held as a small electrical charge in each photo sensor. The charges are converted to voltage one pixel at a time as they are read from the chip. Additional circuitry in the camera converts the voltage into digital information.
As for the CMOS sensor: a CMOS imaging chip is a type of active pixel sensor made using the CMOS semiconductor process. Extra circuitry next to each photo sensor converts the light energy to a voltage, and additional circuitry on the chip may be included to convert the voltage to digital data.
One could infer from the above information that a larger sensor is synonymous with better photos, but this is oversimplifying an extremely complicated piece of technology. Sure, a larger sensor would help, but a large sensor without anything else wouldn't. Good photo quality is the product of a balance between the efficiency of the sensor technology, lens quality, image sensor size and, ultimately, what you want to do with your photographs.
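The last step both CCD and CMOS pipelines share – an ADC quantising each photosite's voltage into a digital number – can be sketched like this (the 10-bit depth and 1 V full scale are illustrative assumptions, not any specific sensor's spec):

```python
# Toy analogue-to-digital conversion of a photosite voltage.
FULL_SCALE_V = 1.0  # assumed sensor full-scale voltage
BITS = 10           # a common raw bit depth

def adc(voltage: float) -> int:
    # Clip to the sensor's range: anything past full scale saturates
    # (this is what a "blown" highlight is).
    voltage = max(0.0, min(voltage, FULL_SCALE_V))
    return round(voltage / FULL_SCALE_V * (2**BITS - 1))

print(adc(0.0))  # 0    (no light on the photosite)
print(adc(0.5))  # 512  (half of full scale)
print(adc(2.0))  # 1023 (saturated / blown highlight)
```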
6. GPS:
GPS, or Global Positioning System, is one of the older pieces of technology to be integrated into smartphones. The system functions using an antenna placed in your smartphone, which locates the device on the basis of its interaction with satellites.
The working of GPS.
When the GPS on your phone is turned on, the GPS antenna listens for signals from various satellites. On establishing a lock on about three to four satellites, the phone can give you a fairly precise estimate of your location.
A newer introduction to GPS technology is something called A-GPS, or Assisted GPS. A non-A-GPS device may take up to several minutes to locate the satellites nearest to it. A-GPS speeds up this process by giving the device access to satellite almanac data over the cellular network, so the GPS receiver immediately knows where all the satellites are.
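The geometry behind a fix can be shown with a toy 2-D trilateration. Real GPS solves the same kind of system in 3-D plus a receiver clock-bias term, so treat this purely as an illustration:

```python
import math

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Each p is a known (x, y) position; each d is the measured
    distance to that point. Returns the receiver's (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise cancels the x^2 + y^2
    # terms, leaving two linear equations a*x + b*y = c.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Receiver actually at (3, 4); three "satellites" at known positions.
fix = trilaterate((0, 0), 5.0, (10, 0), math.sqrt(65), (0, 10), math.sqrt(45))
print(fix)  # ≈ (3.0, 4.0)
```

Because the squared terms cancel, the intersecting circles reduce to a simple 2×2 linear solve, which is why a position fix is cheap to compute once the satellite distances are known.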
The location of the antenna generally varies from smartphone to smartphone. In the iPhone, the antenna is placed on the lower back of the device.
7. Pressure Sensor:
One of the latest sensors to be added to smartphones is the pressure sensor. Like the accelerometer and gyroscope, the pressure sensor works using a MEMS chip.
The working of the pressure sensing MEMS chip.
In this case, the chip consists primarily of a diaphragm which bends on application of pressure. The measurement of this bending allows the chip to measure the pressure and then transmit the required data to the phone.
Though a relatively new addition, this sensor is already seeing many applications in smartphones. Measuring pressure allows the phone to roughly calculate the height at which the user is present, which allows for more accurate GPS. Apart from this, the sensor is also seeing wide scale application in new apps.
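The height estimate mentioned above comes from the standard barometric formula, which converts a pressure reading into an approximate altitude. A minimal sketch using standard-atmosphere constants:

```python
# Standard barometric formula: altitude from measured air pressure.
SEA_LEVEL_PA = 101_325.0  # standard sea-level pressure (pascals)

def altitude_m(pressure_pa: float) -> float:
    return 44_330.0 * (1.0 - (pressure_pa / SEA_LEVEL_PA) ** (1 / 5.255))

print(round(altitude_m(101_325.0)))  # 0 m at sea-level pressure
print(round(altitude_m(89_875.0)))   # ~1000 m
```

In practice phones blend this with GPS, since weather changes shift sea-level pressure and would otherwise bias the estimate.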
Amazingly, MEMS makers are still looking to expand on these sensors, with humidity and temperature sensors also already in the works. As more of these sensors are integrated into phones, there is no doubt that the user experience will continue to improve and the devices will become more interactive.
I know it’s been a long read, but hopefully you’ll have learnt a lot more about the world’s current unsung heroes!
The world is undoubtedly contracting every passing month – as everybody is connected to each other in the world of Facebook, Connected Devices and via the Internet of Things.
Smartphones, smart TVs, internet-enabled surveillance cameras, and cute, interactive kids' toys through which children can send electronic messages to their parents, are all smart appliances that can be controlled directly through your smartphone, with just an application installed.
This obviously has numerous advantages: it is convenient, and it doesn't require adding extra cameras, microphones, scanners, sensors, cards or wearable tags to an already over-populated ecosystem of devices. Basically, all things good and hassle-free.
Technology being the double edged sword that it is though, all these connected objects are significantly risky as they are prone to hacking and can pose a real threat to one’s security and privacy.
Instances from the real world to back this premise are numerous, security cameras used to spy on users being one of them.
The only way out of all this is to improve the modes of authentication, to establish a secure, smart network in the future. While many methods are prevalent or being proposed – facial recognition, gait recognition, and biometric data like fingerprints – they are still bound by limitations, in the sense that these methods require additional devices like scanners, which in turn are again prone to hacking.
However, a team of researchers from China's Northwestern Polytechnical University has proposed a seemingly better alternative for Internet of Things authentication.
If reports by Motherboard are to be believed, the method, dubbed FreeSense, uses the near-ubiquitous radio frequency (RF) signals of WiFi to identify individuals. This is done by locating the unique perturbation pattern produced when individuals move around and intrude upon these signals, as per their unique body shape and gait.
You must be wondering how this system tracks unique perturbation patterns.
This is done by tracking changes to a WiFi signal’s channel state information (CSI) as it disseminates through the space between a transmitter and a receiver.
“Due to the difference of body shapes and motion patterns, each person can have specific influence patterns on surrounding WiFi signals while she moves indoors, generating a unique pattern on the CSI time series of the WiFi device”, explained the researchers in their paper. “FreeSense…is nonintrusive and privacy-preserving compared with existing methods [of human identification]”.
The team of researchers also elaborated on the mechanism of FreeSense: “Specifically, a combination of Principal Component Analysis (PCA), Discrete Wavelet Transform (DWT) and Dynamic Time Warping (DTW) techniques is used for CSI waveform-based human identification”.
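Of those three techniques, Dynamic Time Warping is the easiest to show in miniature: it measures how similar two waveforms are even when one is stretched in time, as when the same person walks the same path a little slower. A bare-bones sketch of the idea, not the paper's actual implementation:

```python
# Classic dynamic-programming DTW distance between two 1-D series.
def dtw(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

walk_a = [0, 1, 2, 3, 2, 1, 0]
walk_b = [0, 1, 1, 2, 3, 3, 2, 1, 0]  # same gait, slightly slower
print(dtw(walk_a, walk_b))            # 0.0: same shape despite the stretch
```

A plain point-by-point comparison would call those two walks different; DTW's ability to align stretched signals is exactly what lets FreeSense match a person's CSI waveform across walks of different speeds.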
The researchers conducted an experiment whereby nine individuals were tracked in a furnished, 322-square-foot smart home environment, using a conventional WiFi router and laptop. The researchers initially prepped their system with data garnered from each individual's gait as they walked across the room in a straight line. Next, the researchers had the subjects walk across the room repeatedly, in order to figure out the accuracy of the system in identifying each individual.
The system performed unexpectedly well. With just one person in the room, it was capable of identifying the individual 75% of the time; with two people, the accuracy level went up to 95%. When the number of people inside the room was increased to six, FreeSense was still able to successfully identify each person with an accuracy rate of 90% – which is unbelievable, one must say – keeping in mind that some individuals may have been harder to differentiate from others if they happened to have a similar gait and/or body shape.
The researchers are now putting their efforts into increasing the performance of the system in various scenarios.
Identification through Wi-Fi seems like a good solution, although we can expect other better proposals to come along in the next few years.
Use of WiFi signals for authentication does have some clear benefits: it is a convenient, device-free alternative which doesn't require any extra devices like cameras, scanners, microphones, wearable tags, sensors or cards, instead utilising the already existing WiFi infrastructure to function.
Also, these persistent WiFi signals don't require adequate light or line of sight to identify users. The approach is definitely less intrusive and tiresome compared to other methods like facial recognition and fingerprinting, and looks suitable for domestic or small-scale use – in homes, or in assisted living environments equipped with smart appliances and technologies.
Samsung Wants Your Galaxy On Another Cloud
It's been a sore point for a while – at Samsung, and with its users.
Apple has iCloud, Windows has OneDrive, and until now, Samsung users were dependent on either Google Drive, or third-party cloud storage services like DropBox, Box, Mega, and even OneDrive (since it is also available as an app across different operating systems).
Samsung Cloud is, as the name suggests, a cloud storage service for specific Samsung Galaxy devices. While it is currently restricted to the Galaxy S7 and Galaxy S7 edge, it's a safe bet that all Galaxy devices going forward will enjoy Samsung Cloud's company.
It currently offers 15 GB of free storage space, beyond which it becomes a paid service (a lot like Apple’s iCloud – which gets you 5 GB of free space, and then becomes paid). Once connected to a compatible Galaxy device, it backs up your documents, pictures, files, and other sorts of content on its own. It also backs up some native apps including Contacts and Calendar and certain third-party apps.
Undoubtedly, the Samsung Cloud is an extension of the company’s Smart Switch service, which is meant to enable users to easily switch data from one device to another. Take note though, we mean another Galaxy device. Samsung’s site clearly states: “Samsung Cloud can only back up, sync and restore data across compatible Galaxy devices and cannot be used to transfer data from non-compatible devices“.
Users basically have the most recent copy of their data on the Cloud, which can be used as storage, or as a medium to transfer the data onto another device. This is where the additional features kick in when you try to transfer the data onto a new device. The home screen and user settings, including layout settings and shortcuts, are also backed up, which means that when you log into the new device, it feels familiar instantly. Your photos, notes, calendar, contacts: everything you could need will sync with the new device!
One of the best features of Samsung Cloud (much like iCloud and OneDrive) is that the same account can be used with, and connected to, multiple devices. What this basically means is that all your data is collected in one place, and can be accessed more easily, as long as you're using the same Samsung Cloud account.
There is also an Auto Back-up feature, which enables automatic upload of the device’s state through a Wi-Fi connection every 24 hours. To ensure that this doesn’t hamper your usage of the device, this happens only when the smartphone’s screen is turned off and it has been charging for at least an hour.
One wouldn't really say that something called the Cloud Wars exists, mostly because there is simply no way to be the best cloud storage service. Some of the offerings are partially free, some entirely free, while some are completely paid. Most cloud storage services are linked to certain brands and work as automatic out-of-device storage, or backup storage, for them.
In such a scenario, what highlights a cloud storage facility are the features it provides, and the pains it takes away.
This last bit is where it becomes easy to justify the lack of a clear winner – no single service currently does everything well, across operating systems, to be called the winner. Each of the services we (Chip-Monks) have tested and experienced falters, or has some lacunae or the other.
Samsung Cloud storage is for now, being appreciated a fair bit, with most critics and reviewers stating that it is “how an Android backup and restore system should be done”. Given all that Samsung has to offer, that sounds about right; but in the extremely restricted space of two compatible devices!
Samsung Cloud was introduced along with the unveiling of Galaxy Note7 last month. It is available on Note7 and users will be able to use it right out of the box.
Industry conjecture claims that it can soon be expected to be made available for the Galaxy S6 series and the Note5 as well; however, we at Chip-Monks read things differently.
We really doubt Samsung is going to extend this backwards – since it is an extremely potent tool to convince customers fence-sitting their upgrade decision currently, to finally make the jump. Why squander that trump card?
Update: The update for the S7 series – which brings them Samsung Cloud – weighs about 150 MB, and started rolling out in Italy first. It also brings a new Gallery app along with the August security patch, some performance and power consumption-related improvements, and fixes for cover recognition and flashlight/torch-related issues.
Recently, the update has reportedly started rolling out in India as well, starting this week. As it happens with every OTA, it takes a while for the roll out to entirely happen, but just in case you get impatient, you can always check for the update via your Settings menu.