Apple Falls Short Of Chips For iPhone 8, Ropes In Its Oldest Foe.
Earlier this year, we wrote about how Apple seems to keep veering back to its “arch-nemesis” Samsung every time it finds itself in trouble on the supply-chain front.
It’s a funny relationship – highly publicised animosity, lawsuits that leapfrog from one company to the other, and products that are clear competitors to each other’s wares yet ape each other unabashedly – and still, Apple and Samsung seem to find the same bed in times of dire need.
First, Apple purportedly turned to Samsung for the OLED screens that are expected to glam up the upcoming iPhone 8, and now it seems Samsung’s going to be providing another key component of the same iPhone 8 – the solid state memory chips!
Well, the three new iPhones – the iPhone 7S, iPhone 7S Plus and iPhone 8 (which is what the blogosphere is calling them) – expected this fall, are rumoured to come with 3D NAND chips for storage.
Apple has been using that tech since last year’s iPhones, and it is one of the many reasons the iPhone 7 is so fleet-footed. Word now is that Apple’s suppliers have failed to meet the demand needed for this year’s production numbers. The technology is still fairly new, and the reasons for the shortfall have not been fully articulated by Apple or anyone else in the know.
Word also is that Apple is being forced to turn to Samsung, its fiercest market rival, for satisfying Apple’s 3D NAND chip thirst.
So that you know and understand the implications, let me chalk up a quick Electronics 101 lesson.
NAND memory is a type of non-volatile storage technology that does not require power to retain data and is used in almost all forms of solid-state storage. NAND memory is the secret sauce that resides in devices onto which large files are frequently uploaded and replaced – like your MP3 players, USB drives and smart devices.
As with every other piece of technology, NAND technology too, is being improved year after year, to accommodate more storage capacity, faster transmissions and to reduce the voltage demands of the memory – which leads us to… 3D NAND technology.
3D NAND packs a much higher density of transistors into a similar volume of space as planar NAND, by stacking memory cells vertically in multiple layers. The charm, obviously, is that this tech allows more memory in the same footprint; with devices becoming more and more svelte, it’s obvious that manufacturers (especially form factor-focused ones like Apple) would want to harness such technology as quickly as it can be mass-produced.
If all this tech talk sounds like mumbo-jumbo to you, all you need to take away is that 3D NAND costs less per GB of storage, reduces power consumption, boosts reliability and delivers higher data-write performance – all functionalities you’d want in your next big flagship, especially if you’re Apple, and even more so if you’re trying to do something special on an upcoming Anniversary Edition.
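To make the density point concrete, here’s a toy back-of-the-envelope calculation. Every number in it (the cell count, the 48-layer stack, 2 bits per cell) is an illustrative assumption, not the spec of any actual shipping chip:

```python
# Toy illustration: how vertical stacking multiplies capacity within the
# same chip footprint. All numbers below are invented for the example.

def capacity_gb(cells_per_layer: int, layers: int, bits_per_cell: int) -> float:
    """Capacity in gigabytes for a given cell count, layer count and cell type."""
    total_bits = cells_per_layer * layers * bits_per_cell
    return total_bits / (8 * 1024**3)  # 8 bits per byte, 1024^3 bytes per GB

# A planar (2D) NAND die: a single layer of cells.
planar = capacity_gb(cells_per_layer=64 * 1024**3, layers=1, bits_per_cell=2)

# A hypothetical 3D NAND die with the same footprint, stacked 48 layers high.
stacked = capacity_gb(cells_per_layer=64 * 1024**3, layers=48, bits_per_cell=2)

print(planar)   # 16.0 (GB)
print(stacked)  # 768.0 (GB)
```

The takeaway: the footprint never changes in this sketch, only the layer count does – which is why 3D NAND delivers more gigabytes per square millimetre of silicon.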
Back to the story at hand: word is that Apple’s primary suppliers of 3D NAND chips, SK Hynix and Toshiba, have fallen short by about 30%, due to poor yield. With the expected September launch drawing closer, a bottleneck on the supply front is certainly not desirable for the iPhone maker.
Turning to Samsung was the obvious choice.
No details about how many chips Samsung would be providing Apple with have been released yet. As per reports, Apple currently buys up around 18% of the world’s supply of NAND chips. And if the iPhone 8 turns out to be a success, this percentage could increase.
Good thing Apple has the money to secure supply, even when production is not going as smoothly as everyone would like!
However, Apple’s not the only one in a jam. The problem of acquiring 3D NAND chips will afflict other players in the market too – such as LG and Huawei, whose products are standing in line for their quota of silicon. The word is that vendors are struggling with a global shortage of 3D NAND flash chips, one that is not expected to ease until 2018.
While all this sounds gloomy and forbidding, it’s actually manna from heaven for Samsung.
Beleaguered by the hole in revenues caused by the Note7 implosion last year, billions of dollars lost in PR and brand image, poor customer confidence that spilled over to other products in its arsenal and, most embarrassingly, corporate troubles, Samsung had been drowning under an unprecedented dip in its fortunes. This was the shot in the arm that Samsung needed.
And they seem to have grasped this opportunity with both hands.
The humongous success of the Galaxy S8 and Galaxy S8+, along with this surge in 3D NAND chip demand for which Samsung is seemingly becoming the knight on a white steed, has helped Samsung refill its coffers and win back some bragging rights in the industry.
So if you’ve read about Samsung’s own estimation of a record quarterly operating profit for April-June, you know where a lion’s share of it came from. What’s even better is that analysts say that this revenue stream will continue to pad Samsung’s margins for the rest of the year.
At Chip-Monks, we applaud Samsung’s tenacity, and especially love that Apple’s still got its humility about it, even if it’s all in the interest of profitability! Who cares? Partnerships that heal are inspirational.
Are We Really Prepared For The Era Of Digital Globalisation?
Globalise, as defined by the Merriam-Webster dictionary, means “to make (something) cover, involve, or affect the entire world”.
Digital Globalisation thus implies the creation of systems, platforms and capabilities that would involve or affect the online world, in its entirety.
Back in the day, globalisation was simply the efficient movement of goods, money, information, and people across borders. Today, with connectivity built into everything, borders seem to have evaporated. The automation and massive technological revolution under way only feed further into the belly of this ever-growing giant.
Much like any other revolution, digital globalisation, too, delivers many positive socioeconomic benefits to our lives. Yet it is also causing considerable disruption and inequalities that must be addressed. While in developing nations it has already been responsible for helping move a billion people out of poverty and into the global economy, it has also been responsible for widening the gap between the winners and the losers in the labor market.
The downside of this equation has been running us into a unique problem: skilled workers benefit from expanding global opportunity, while manufacturing employees suffer due to automation and outsourcing.
Globalisation is often blamed for the displacement of skills and labour. The skepticism prevails that global commerce and technological advancement will never create real new economic growth, but will rather hurt existing commerce.
Even though we admit that this is a gross oversimplification of what’s actually happening, we contend one thing: if we do not quickly switch from “doing the task manually” to “tasks being done with the help of automation and organisation”, we run the risk of continued economic stagnation, increasing social inequalities and a more insular society.
The breakneck speed at which we connect with the world has changed our lives entirely. Keeping tabs on whatever is happening throughout the world, and doing everything online – from banking, to shopping, to booking tickets, to storing photos, to looking for jobs – have now become almost given skills, things you ought to know how to do. But those are the things you see on the consumer end of the equation; the other side is an entirely different ballgame.
The fact that the next wave of innovation is already all set to marry mass global connectivity with big-data analytics and artificial intelligence, only makes the risk of continued stagnation worse. This is likely to result in an upstream movement in the labor force from administrative and manual workers to professional trades and knowledge workers, which will change the world even more than the digital revolution did when it hit three decades ago.
Thus we come back to our primary question: Are we prepared for an era of Digital Globalisation?
Not adapting to these changes fast enough is going to leave us in a very complicated spot. These new technologies could create huge improvements in health care and education, and birth entirely new businesses and economic growth. They could bring many more people out of poverty and continue expanding the global economy – all ultimately good things, but only if we are able to adapt to them.
The first thing we need to do is admit that we are not yet ready for any of this. Once we’ve said those hard words to ourselves, we can move on to the next step: doing a better job of preparing for, and limiting, the known negative impacts of this disruption. Amongst other things, this would require that we increase infrastructure investment, work towards striking more balanced trade agreements, and foster an environment more conducive to new business creation.
The next critical thing to do is address the underlying cause of our inability to adapt our workforce to changes in technology.
What we need to do is change that, by adapting, retraining and redirecting our labor force. This calls for a fundamental redesigning of our education and career systems, which currently are built around a legacy of the previous industrial model and its needs.
Back when we were faced with the Industrial Revolution, we redesigned our education systems to make our youth capable of working with the changes. The “high school” ideology came into existence precisely to migrate the common people from the agricultural sector to the industrial sector.
Something similar is once again critical to address the shift in required skill sets. It is also important to understand that training only the youth is not going to be enough; the pre-existing workforce must also be retrained, so that an existing, more mature workforce can be retained.
These new skill sets will need to be grounded in the practical realities of less actual “doing” and more “organising.”
What is also going to be critical is fostering the development of softer, more creative, social-orientated skills such as teamwork, judgment, agility and adaptability. All of this will require a never-before-seen collaboration across governments, corporations, and educational institutions.
But the truth stands: it will all be worth it, if done right. It will provide us with the greatest opportunity for improving the quality of lives, achieving greater equality and driving economic growth on a global scale.
We need to invest in our people, and given the way the world is moving digitally, that is not optional anymore. We are not ready for an era of digital globalization, and we must work on that, starting now.
Meet Cortica: An Israeli AI Company That's Teaching Machines To Observe And Reason, Like Humans Do
The human brain processes all information via electrical impulses. You knew that, right? Well, that is exactly what inspired Igal Raichelgauz, CEO of Cortica, an Israel-based Artificial Intelligence startup. He saw the human brain as an electrical circuit and set out to replicate that circuitry to create an AI-based capability that would endow machines with a similar skill set.
Cortica wanted their AI to have a sight sense on par with that of humans.
And we do indeed have an astonishingly complex sight system – the photoreceptors in your eyes convert everything you see into electrical signals. All that information is carried by those signals to a part of your brain that sorts and analyzes the color, depth, shape, and size of all those objects. This data is then received by the cortex – the part that most interests Cortica.
Remember poststructuralism? For those of you who need a refresher: you only know a table as a table because you see it in relation to a chair. If the chair didn’t exist, how would you know what a table is, or what it’s used for?
Something similar happens in your visual cortex. It classifies all the objects you see into different categories by assessing them in relation to all the objects you’ve ever come across.
That’s how you know what you just saw was a bird, or a bottle, or your friend, or anything else.
Sure, you know how little time it takes for our brain to perform the entire process since you experience it every waking moment of your life, but have you ever stopped to wonder, to revel or to acknowledge the sheer speed and processing power behind it?
You know what you saw the moment you saw it. Cortica believes it has reverse engineered this process, replicated the biological visual cortex of humans.
Guess how they achieved that?
They worked on a piece of rat brain, a piece that is still living. Yup, you read that right!
The brain gave them access to the electrical interface of all the neurons contained in that tissue. They were able to understand the input-output process of the neurons. They discovered that with some modifications, a neural network could create a “conceptual signature” – without any prior training. It would be able to recognize similar objects, and differentiate them from others.
Such an AI would be able to learn by itself, much like babies do – by observation and reasoning. While we observe and learn from the world around us, it would do the same from the data available on the web.
This is Cortica’s own, unique approach to what is called ‘unsupervised learning’ within the field of artificial intelligence.
Just so you’re on the same page, there are 3 kinds of machine learning – supervised, unsupervised and semi-supervised.
Supervised learning is when you teach the AI from a pre-determined data set, so you already know the output. This is the most commonly used one.
Unsupervised learning is when you give the AI no prior training, and you tell it to solve the problem with only the necessary input. The output from such an algorithm is unknown. For instance, say you want your AI to categorize certain geometrical shapes into matching groups.
If you’re using supervised learning, you would have taught the AI about circles, squares, hexagons etc. before giving it the problem. In unsupervised learning, however, you would teach your AI nothing before asking it to solve the problem. It would see the various shapes, categorize them based on similarity, and give its own label to them. This is a process much more difficult to teach an AI.
Semi-supervised learning falls between these two. The AI would have an incomplete set of reference data, and it would hazard the best possible guess based on the limited data it has and its own ability to extrapolate.
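To make the shapes example concrete, here’s a minimal, purely illustrative sketch of unsupervised grouping. The “feature” (a corner count per shape) and the grouping rule are assumptions invented for this toy – this is not how Cortica, or any production system, actually works:

```python
# Toy unsupervised grouping: the algorithm is never told what a "square",
# "triangle" or "circle" is. It just groups similar inputs together and
# invents its own anonymous labels (0, 1, 2, ...).

def unsupervised_group(corner_counts):
    """Cluster items by similarity (here: identical corner counts),
    assigning invented labels in order of first appearance."""
    cluster_of = {}   # corner count -> invented cluster label
    labels = []
    for c in corner_counts:
        if c not in cluster_of:
            cluster_of[c] = len(cluster_of)  # new cluster discovered
        labels.append(cluster_of[c])
    return labels

# Nine shapes described only by corner count: squares (4), triangles (3),
# circles (0). The human names never appear in the input.
shapes = [4, 4, 3, 0, 4, 3, 0, 3, 4]
print(unsupervised_group(shapes))  # [0, 0, 1, 2, 0, 1, 2, 1, 0]
```

In supervised learning you would instead hand the algorithm labelled pairs like `(4, "square")` up front; here it discovers the three categories on its own and a human only names them afterwards.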
Now do you see the ramifications of what Cortica has achieved? Two words – it’s huge!
But Cortica isn’t completely done yet. There’s still time before the technology enters the consumer industry, but Cortica already claims to have created an AI that can see and process information like humans do.
So many possibilities!
Self-driving cars have already entered the marketplace. But imagine if they could actually recognize and understand what an object or obstacle ahead of them is. The car would stop by itself if it sees a pedestrian crossing the road, thus preventing many road accidents.
They might be able to recognise accidents on the road and call for help independently.
Your smart home gadgets would revert to the settings that are specific to you when they see you approaching. An air conditioner could increase the temperature if it sees a child in the room, so they don’t get cold. The refrigerator could detect which groceries have run out and remind you to get more.
Amazon’s grocery store in Seattle is already automated, but what if it could actually see you? That would remove the need to even scan the app at the entrance. You could just walk right in, and it would recognize you from its database and process you and your purchases independently and accurately!
The possibilities are truly endless.
Other AI startups such as DeepMind, RealFace, and Genee have been acquired by Google, Apple, and Microsoft respectively. Would Cortica too become a target to be acquired, or would it be able to hold its own against them? Its technology certainly looks powerful enough.
The world is changing, friends. Get ready to see it differently, soon.
Jigsaw is an incubator at Google’s parent company, Alphabet, that is tasked with what is perhaps the most difficult of the jobs under that roof – solving the thorniest of geopolitical problems that emerge online.
The team is a think tank with the goal of fighting the unintended consequences of technological progress that have lately been raising a lot of concern.
What’s The Buzz About?
Social media platforms have been garnering a lot of negative attention over the last year – for things they never really anticipated their platforms would be used for. Under the pump for their (albeit unintended) role in everything from the proliferation of fake news, to being carriers of extremist content, to toxic and hate speech, the dark side of the internet has had these internet majors squirming of late.
With governments brewing laws to pin the companies down for what happens on their platform, fines piling atop each other and most importantly, the platforms becoming suspects of sorts, after any big incident – the pressure on social media companies has been building.
Google, Facebook, Twitter – each of them is struggling to keep its head above water. And everyone has been pedalling really hard.
Consequently, Facebook and Google have made fundamental changes in their frameworks over the last few months. Twitter is still testing out new coping mechanisms every day, trying to gain a handle on the issues.
So, What Is Jigsaw And How Might It Help?
Google Ideas was born around the turn of the decade, when Eric Schmidt (then Google’s CEO and currently Executive Chairman of Alphabet) approached Jared Cohen (formerly with the Policy Planning Committee at the US State Department), with the idea of a “think/do tank”.
A team of Google’s engineers, research scientists, product managers and policy experts was handpicked and tasked with the moonshot goal of dealing with the unintended uses and consequences of the advancement of the internet.
Back then, the major concerns were cyber bullying and cyber-censorship.
Those goals have since expanded (and how!).
In February 2016, Google Ideas metamorphosed into a technology incubator named Jigsaw, when Google became Alphabet Inc.
What Are They Up To?
Jigsaw has come up with a unique approach – talk to the people causing the trouble.
No, don’t get us wrong – they do not plan to sit the recruits of ISIS down in an attempt to talk them out of terrorism. They instead, want to focus on understanding why it is that these people act in such extremist ways and how it is that their actions are enabled.
With that understanding of the why and the how, Jigsaw then plans to create mechanisms and tools to combat these problems.
Jigsaw has been talking to fake news creators, jihadis, and cyber bullies so that they can understand their motivations, processes, and goals.
“We look at censorship, cybersecurity, cyberattacks, ISIS – everything the creators of the internet did not imagine the internet would be used for”, explained Yasmin Green, one of the leaders at Jigsaw.
With their compass pointed in the right direction, the task of eroding the problem at its core begins.
The Visit To The Macedonian Fake News Factory
A case in point is that of Macedonia, the surprising haven for the pedlars of fake news that had held such a sway over the 2016 presidential elections in the U.S.
Green and her team recently visited Macedonia to meet with some of the now-prosperous creators of fake news, with the goal of understanding the business model of fake news dissemination.
“[The problem of fake news] starts off in a way that algorithms should be able to detect”, said Green.
With the insight they gained from the visit, Green’s team wants to tweak Google’s (and Alphabet’s) existing infrastructure so that it detects those red flags.
Green’s team is working on creating algorithms that would identify the process by which fake news is disseminated, and then be able to disrupt it.
The team learnt that the content farms that disseminate fake news utilize social media and online advertising – the same things that legit online media and publishers use. The key then lies in the algorithm being able to detect the difference between what is legit content and what is content with malicious intentions.
Jigsaw is now working on a tool that would be able to do exactly that. The tool could not only be shared across Google, but also across competing platforms like Facebook and Twitter, in a hopefully successful attempt to curb the epidemic that fake news has become.
Tête-à-tête With ex-ISIS Recruits
Along with fake news, Jigsaw is also tasked with combating pro-terror propaganda on the internet, for which they came up with a different, yet equally effective and innovative solution.
Last year, the team travelled to Iraq to speak directly to ex-ISIS recruits, and what came into existence as a result of that trip was the Redirect Method.
The Redirect Method uses machine learning to detect extremist sympathies based on search patterns and then redirects users to content that is intended to play against their sympathies.
Say, one’s searches show a pattern of sympathy towards the ISIS. Once the pattern has been detected, the next time that user searches for a pro-ISIS video, he will be redirected to videos that show the ugly side of ISIS.
The technique aims to use counter-narrative to diminish the allure of extremist ideology. For someone who has already reached the point where they are only two steps away from buying a ticket to Iraq and joining the caliphate, this method might not work. But for people who have just started to get curious, it could be effective, by appealing to the basic human instinct of self-preservation.
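Conceptually, the flow can be sketched like this. To be clear, this is a toy illustration of the redirect idea, not Jigsaw’s actual implementation – the watchlist, the scoring rule and the threshold are all invented for the example:

```python
# Toy sketch of the redirect idea: score a user's recent searches against
# a watchlist, and once the score crosses a threshold, serve counter-
# narrative content instead of the requested results. All terms, titles
# and thresholds below are made up for illustration.

WATCHLIST = {"join caliphate", "isis recruitment", "jihad travel"}
COUNTER_NARRATIVE = ["Testimony of an ISIS defector",
                     "Life under ISIS: a documentary"]

def sympathy_score(search_history):
    """Fraction of recent searches that match the watchlist."""
    if not search_history:
        return 0.0
    hits = sum(1 for q in search_history if q.lower() in WATCHLIST)
    return hits / len(search_history)

def results_for(query, search_history, threshold=0.3):
    """Redirect to counter-narrative content when sympathy is detected."""
    if sympathy_score(search_history) >= threshold:
        return COUNTER_NARRATIVE          # the "redirect"
    return [f"ordinary results for: {query}"]

history = ["football scores", "isis recruitment", "join caliphate"]
print(results_for("isis recruitment", history))  # serves the counter-narrative list
```

The real system reportedly works off search patterns and targeted-advertising machinery rather than a literal keyword list, but the shape of the logic – detect a pattern, then swap what gets served – is the same.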
“It’s mostly good people making bad decisions who join violent extremist groups”, Green says. “So the job was: let’s respect that these people are not evil and they are buying into something, and let’s use the power of targeted advertising to reach them – the people who are sympathetic but not sold”.
Since the launch of the Redirect Method late last year, a total of 300,000 people have watched videos served up by it.
Perspective On Cyber Bullying
Jigsaw’s to-do list is not done yet!
Another task on the list for Jigsaw is to create a tool to target toxic speech in the comment sections on news organizations’ sites.
Jigsaw has come up with a tool which they call Perspective – it is a machine-learning algorithm that uses context and sentiment training to detect potential online harassment, and then reports it to the moderators.
Currently, a beta version of Perspective is being used by the New York Times. If effective, the tool could be extended not only to other news organizations, but across platforms encompassing public expression.
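To illustrate the moderation flow conceptually – and only conceptually, since the real Perspective uses a trained machine-learning model, not a word list – here’s a toy sketch in which the terms, weights and threshold are all invented:

```python
# Toy comment-moderation pipeline: score each comment for toxicity and
# flag those above a threshold for a human moderator. This is NOT how
# Perspective actually scores text; the word list below is invented.

TOXIC_TERMS = {"idiot": 0.6, "stupid": 0.5, "trash": 0.4}

def toxicity(comment: str) -> float:
    """Crude score: the highest weight of any flagged term present."""
    words = comment.lower().split()
    return max((TOXIC_TERMS.get(w, 0.0) for w in words), default=0.0)

def flag_for_moderation(comments, threshold=0.5):
    """Return only the comments a human moderator should review."""
    return [c for c in comments if toxicity(c) >= threshold]

comments = ["great article", "you are an idiot", "this is trash"]
print(flag_for_moderation(comments))  # ['you are an idiot']
```

The point of the sketch is the pipeline shape – score, threshold, escalate to a human – not the scoring itself; swapping the word list for a trained model is exactly the hard part Jigsaw is working on.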
The tool is constantly evolving, which has its positives and its negatives. Because of its nature, the tool is open to the risk of being potentially biased against certain words, ideas, or even tones of speech. The potential of the tool can also mean that terrorist regimes, or authoritarian regimes, could tweak it and use it for complete censorship.
Fearing this, and other potential risks, the team has decided not to open up the API to allow others to set the parameters; they will be setting the parameters themselves for now.
“We have to take measures to keep these tools from being misused“, said Green. “Just like the internet itself, which has been used in destructive ways its creators could never have imagined“.
Jigsaw is clearly aware that not all of its solutions will be effective and that some of them could also be potentially misused, but that, for them, is no reason to stop trying.
Half Baked Egg Better Than None?
With the internet evolving the way it is today, something like Jigsaw is no longer a luxury for internet companies. They might not be the only ones to blame for all the issues of concern on the internet, but they are the ones on whose watch the issues are spreading their tentacles, and resting peacefully.
While we do know that all social media companies are trying to derive mechanisms to deal with the problems on their platforms, it is refreshing to see Alphabet (née Google) take such a unique, and might I say, devoted approach.
There are several other projects that Jigsaw has running in related fields.
One is Project Shield, which uses Google’s infrastructure to protect independent news sites that tend to suffer crippling digital attacks when they publish something controversial or that questions powerful institutions.
Another one of their projects, one that is not quite technical, is called Abdullah-X.
It is an animated YouTube series that explores themes of young Muslim identity in society and aims to steer young minds away from extremism. It is one of the things they are using to counter the propaganda from groups like ISIS.
Another important tool brewed under the Jigsaw roof is called Investigative Dashboard.
Investigative Dashboard makes public documents, such as financial or property records, searchable. This is integral for journalists investigating money laundering or corruption. It also allows researchers and journalists to work collaboratively, creating a platform for data-driven investigations.
One of the other projects at Jigsaw taps into Google’s technical expertise to provide real-time interactive global maps of cyberattacks and tools for forensic video analysis of violent incidents in war zones!
Phew! I’m sure there’s a lot more up Alphabet’s sleeves – perhaps stuff that doesn’t belong in the public domain – but I think the list up top truly is an excellent representation of Alphabet’s intent to fight back, and an excellent example of how technology and Big Data, when leveraged for responsible causes, could change the path of history.
Google has always been a company that has set itself apart from the others. That has reflected in its original motto “Don’t be evil”, which was revised last year, to “Do the right thing“.
For a company that has always proven itself to be more ambitious and more altruistic than the usual profit-focused corporates, Jigsaw is what sets Google apart in the fight for Mankind’s Good, today.
Cyber criminals are paying a lotta heed to your Androids, and that translates to some bad news.
Malware affects 9 out of 10 Android devices worldwide.
Thus, we urge you to look into your phone and give it a thorough check, including reviewing which apps you’ve installed and where you sourced them from (a critical element of security).
It’s not even been five full months into this year, and yet notorious minds have managed to circulate a flashy 750,000 apps – all aimed at disturbing your handset. This number is set to escalate to a drastic 3 million+ apps by the end of the year!
By which time you would encounter around 8,400 freshly-served malware samples every day!
The problem that basically underlies this cancer is the lack of updates.
Android 7, which has been available in the market since August 2016, has reached a mere 4.9% of all Android smartphones.
That’s an important factoid. We looked at the numbers and researched online to find how the world’s Android devices are spread across OS versions. Ready?
• Gingerbread (versions 2.3 – 2.3.7): 0.9%
• Ice Cream Sandwich (versions 4.0.3 – 4.0.4): 0.9%
• Jelly Bean (versions 4.1.x – 4.3): 10.1%
• KitKat (version 4.4): 20.0%
• Lollipop (versions 5.0 – 5.1): 32.0%
• Marshmallow (version 6.0): 31.2%
• Nougat (versions 7.0 – 7.1): 4.9%
As you can make out, versions 4 through 6 are the bedrock of vulnerability.
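As a quick sanity check on that claim, here’s the arithmetic over the shares listed above (version labels shortened):

```python
# Sum the shares for Android versions 4.x through 6.0 (Ice Cream Sandwich,
# Jelly Bean, KitKat, Lollipop, Marshmallow) from the list above.

shares = {
    "Gingerbread (2.3-2.3.7)": 0.9,
    "Ice Cream Sandwich (4.0.3-4.0.4)": 0.9,
    "Jelly Bean (4.1.x-4.3)": 10.1,
    "KitKat (4.4)": 20.0,
    "Lollipop (5.0-5.1)": 32.0,
    "Marshmallow (6.0)": 31.2,
    "Nougat (7.0-7.1)": 4.9,
}

v4_to_6 = ["Ice Cream Sandwich (4.0.3-4.0.4)", "Jelly Bean (4.1.x-4.3)",
           "KitKat (4.4)", "Lollipop (5.0-5.1)", "Marshmallow (6.0)"]

print(round(sum(shares[v] for v in v4_to_6), 1))  # 94.2
```

That’s over 94% of devices sitting on versions that are at least one generation behind – the bedrock of vulnerability indeed.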
Android gives complete independence to its developers and users to customise the platform according to their requirements. In the same vein, device manufacturers and carriers also have tremendous freedom to develop the ecosystem to suit their needs and preferences.
The big OEMs are also slow in releasing the updates, as they take time to add layers and layers of bloatware in the guise of customizing the OS.
Hence, either the updates provided are very late or they are not provided at all.
And therein, lies the rub.
Bogdan Botezatu, a senior e-threat analyst at security firm BitDefender, had forewarned us of this problem in an interview with CNET back in 2014. Commenting on how accessible malware has become, he’d said “no coding is required to bind Android apps with malicious programs” and “people look at phones more like phones, rather than intelligent computers” – that is to say, people need to understand that their smart devices are at least as prone to malware as their computers.
Google had, in fact, taken a stand on the use of ancient Android versions in the current crop of smart devices. In the interest of culling the fragmentation of Android at the manufacturer level, and to plug the gaps festooning older versions of the OS, Google declared that it would not approve access to Google Apps and the Play Store (which Google presides over more actively) for newly-released Android devices carrying OS versions older than the then-current version. Additionally, any existing device still carrying an old Android version nine months after the latest Android OS was released would also not be welcome on the Play Store, or be able to get Google’s own apps.
Given the increasing usage of Androids in every walk of life, security has come to occupy the forefront. It is an issue that needs due attention to make everyday activity safe for Android users. But Google can’t combat this alone.
Has the situation become insurmountable, or is there hope?
Well, there is plenty of hope to salvage the situation. It would need a little alertness, intelligence and perseverance on your part to maintain the safety of your device, and restraint in its use. So what’ve you gotta do?
See, that wasn’t so difficult! Stay updated, and stay safe. Please exhibit the same caution as you do with your Debit Card and your personal safety!
For the average Indian, Amazon might be just another retail site she would go to, to order a new pair of shoes. She might find Amazon’s services a little more flattering than those of other e-com sites, and the prices (at times) a little lighter on her wallet.
But when she’s ordering that shoe on the website, what she perhaps does not realize is that Amazon is not just a retail outlet, but it’s actually an empire built on a humongous network of services and products.
From Amazon Web Services, which hosts a considerable chunk of the internet, to cloud computing, to tech research and development, to new products, to an on-demand streaming platform that competes with Netflix – Amazon is most definitely a lot more than just an e-retailer.
While most of this has so far been focused on the West, from the looks of it, Amazon might be expanding deeper into other markets – especially the Indian market – and not just by means of retail. It has many, many things up its sleeves!
After much ado, Amazon has finally received clearance to operate its e-wallet in India. The global retail giant recently received a Prepaid Payment Instrument (PPI) license from the Reserve Bank of India, the country’s central bank.
This implies Amazon’s imminent entry into a market that is highly competitive, and post-demonetization, is growing by leaps and bounds.
The market currently is dominated by Paytm, which has over 200 million users. What makes this even more interesting is the fact that Paytm is backed by Alibaba, a Chinese retail giant that is colloquially referred to as China’s Amazon, and is Amazon’s biggest competition in the South-Asian markets.
Up until now, Amazon operated on the PPI licence issued to reward-points management and gift card provider Qwikcilver, in which Amazon had invested USD 10 million in 2014. This roundabout approach limited what Amazon could offer in India.
But with the new licence, Amazon should be able to offer more point-of-sale transactions. This could possibly change how Amazon has approached the e-wallet up until now, where it has only been a functional element to facilitate transactions on the retail network. With the new licence, Amazon could expand into the e-wallet market proper, making the e-wallet an integral part of its retail network rather than just an element that adds functionality.
“We are pleased to receive our PPI licence from the RBI“, said Sriram Jagannathan, Vice President of Payments at Amazon India. “Our focus is providing customers a convenient and trusted cashless payments experience. RBI is in the process of finalizing the guidelines for PPIs“.
Among the significant competition Amazon can expect is PhonePe, the e-wallet of Amazon’s biggest Indian competitor, Flipkart. Currently, PhonePe accounts for 5% of Flipkart’s transactions. MobiKwik, Oxigen, PayUMoney, M-Pesa and FreeCharge are among the other popular wallets in the country that Amazon can expect competition from.
Lately, Amazon has been working with grit to expand into the Indian market, not just through increased retail or widened reach, but through different products and services.
One of the products that Amazon has been quite focused on is Prime Video, its on-demand video streaming platform.
The Indian market has finally been opening up to the idea of TV via the internet. It has not been too long since the days of dial-up connections, which were so slow that watching a video online was more of a dream than anything else, and one that involved endless buffering.
In the last few years, the availability of faster internet has helped people get used to the idea of streaming things over the internet. And Amazon clearly wants to cash in on the opportunity.
Just last year, Netflix, the on-demand streaming giant, entered the Indian market and was lapped up by the hitherto-deprived Indian citizenry. This was after Hotstar, backed by the hyper-popular Star Network, made a place for on-demand streaming in the everyday life of the Indian user. Hotstar did so largely on the back of sports, mostly cricket, until the younger generation in the country discovered that it was easier to watch seasons of their favorite shows on the app instead of downloading them off pirating platforms.
Coming back to Amazon – well, the Internet giant seems to finally have its catalog ready for the Indian user.
They’ve been working to bring uniquely-Indian content, for the picky Indian viewer. To be able to do that, they have been working on partnerships within the Indian entertainment industry. Recent notable ones include the exclusive online rights for Kabir Khan’s upcoming title The Forgotten Army, which Amazon will be marketing as an original.
Amazon has also partnered with various stand-up comedians in the country for the rights to stream their content.
This is in addition to the company having signed deals with Lionsgate and BBC to acquire international titles for the Indian audience.
It partnered with Bollywood star Shah Rukh Khan for exclusive access to all of his Red Chillies Entertainment’s titles last year. The company is also reportedly in talks with Aamir Khan for titles from his production house.
Others include a deal with Paramount for streaming rights of recently released Teenage Mutant Ninja Turtles: Out of the Shadows, Star Trek Beyond, and 10 Cloverfield Lane, in addition to titles from Paramount-owned Transformers, Indiana Jones, Mission: Impossible, Madagascar, Shrek, and Kung Fu Panda franchises.
“India has one of the richest and most vibrant entertainment industries in the world – Amazon is energized by the talent and the passion of India’s film industry and is excited to be making multiple Indian original shows already, with more to come“, said Roy Price, Vice President and Head of Amazon Studios.
Now that Amazon seems to have the content – national and international – sorted, it is the mode of delivery that they are expanding on.
Amazon also recently launched its Fire TV Stick in India at INR 3,999, with additional discounts for existing Prime subscribers. This is Amazon’s Chromecast rival, which lets you watch Amazon Prime content not just on your laptop, mobile or tablet, but stream it onto your television and watch it like the good old days – leaning back on the sofa.
The Fire TV Stick also offers a range of additional services, including built-in apps such as EROS TV, Netflix and Gaana, other popular on-demand streaming platforms in the country.
The device will also reportedly support voice commands. Amazon says that it will understand Hindi dialects and accents swiftly, something that could prove instrumental in giving it an edge over rivals.
The company has said in the past that it intends to launch the Fire TV, its full-fledged TV box, and other services in India soon. They might not be coming as soon as we might like, though! The Fire TV Stick is certainly an indication cementing Amazon’s intentions in that regard.
In other news of expansion into new arenas internationally, the company recently acquired a patent for an on-demand clothing manufacturing warehouse. The patent speaks to a new order of clothing retail altogether, where a customer’s clothes will be made only after the order has been placed. This would enable retailers to offer a lot more customization on their apparel, as well as develop newer options in a far more dynamic manner, depending upon market demand.
The patent is for a computerized system that would include textile printers, cutters and an assembly line. It would also include cameras designed to snap images of garments, providing feedback on alterations needed in subsequent items. Efficiency would increase because goods could be manufactured in batches based on factors such as the customer’s shipping address and further customizations.
“Once various textile products are printed, cut and assembled according to the orders, they can be processed through a quality check, photographed for placement in an electronic commerce system, shipped to customers and/or stored in a materials handling facility for order fulfilment”, the patent reads. “By aggregating orders from various geographic locations and coordinating apparel assembly processes on a large scale, the embodiments provide new ways to increase efficiency in apparel manufacturing”.
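The aggregation idea described in the patent can be illustrated with a toy sketch (all order data and names below are hypothetical, not from the patent itself): incoming orders are grouped by shipping region and garment type, so garments bound for the same area can be printed, cut and assembled in one batch.

```python
from collections import defaultdict

# Toy orders: (order_id, garment_type, shipping_region); all values hypothetical.
orders = [
    ("A1", "t-shirt", "Seattle"),
    ("A2", "dress", "Mumbai"),
    ("A3", "t-shirt", "Seattle"),
    ("A4", "dress", "Mumbai"),
]

def batch_orders(orders):
    """Group orders by (region, garment type), so garments bound for the
    same area can be printed, cut and assembled together in one batch."""
    batches = defaultdict(list)
    for order_id, garment, region in orders:
        batches[(region, garment)].append(order_id)
    return dict(batches)

print(batch_orders(orders))
# The two Seattle t-shirt orders (A1, A3) fall into a single batch
```

This is the essence of the efficiency claim: the more orders that share a region and a garment type, the larger the batches, and the less per-item setup the printers and cutters need.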
Amazon had filed for the patent back in 2015. We are not sure what exactly they are planning to do with it in the immediate future, but it clearly is an indication that the e-commerce giant has its sights set on being a big player in the clothing industry.
Amazon already has quite a clothing and apparel selection that it retails for other brands, in addition to about eight brands of its own, making everything from kids’ clothes to women’s dresses to dress shirts for men.
What’s more interesting is that such technology could also have applications in footwear, bedding, curtains and towels, in materials including but not limited to paper, plastic, leather and rubber.
Not too long ago, Nike in the U.S. was running what it called NikeiD, a program to customize sports shoes for buyers. It allowed customers to choose the shoe type, colours and the like, and it took Nike 3 to 5 weeks to deliver the shoes. With a patent of the kind Amazon has acquired, such a process can be made speedier and more diverse, making more options available to the end customer.
With innovation in fields as diverse as these, it looks like Amazon is concentrating on making itself a part of the user’s everyday life, in more than one manner. Amazon is trying to be the source for all that one can use in a day, from shopping for a wide range of products, to clothing, to groceries, to using the Amazon e-wallet, to coming back home at the end of the day and having your entertainment needs met by Amazon itself, and all of that possible on a device made by Amazon.
Wonder can lead to wisdom. And the mother of wonder is boredom. At least that is how it appears in the case of Wes Cherry.
One could have never guessed that Solitaire, one of the most prominent features of Windows, was the product of bored hours.
Wes Cherry was an intern at Microsoft back in 1988 when he created his entertaining brainchild, which became so popular internally that it was introduced as a standard feature in Windows 3.0 in 1990.
The official purpose of Solitaire was to teach people how to use the mouse properly. Humankind has long since surpassed the era when people needed to be taught how to use a mouse, and yet Solitaire remains one of the most popular features of Windows.
And why not? It’s one of the easiest ways to waste your time! Ironically, Bill Gates thought that the game was “too hard to win”.
Even in as busy an institution as Microsoft, interns do have hours where they just twiddle their thumbs. In an interview with Great Big Story, Cherry said, “I came up with the idea to write Solitaire for Windows out of boredom, really”. He went on, “There weren’t many games at the time, so we had to make them.”
There was another feature that Cherry had added in the game but was later removed – a boss key. Intended as the saving grace for interns, a fake spreadsheet would pop up on-screen through a simple shortcut.
So you might not be working, but then you would not get caught. Obviously, Microsoft chose to discard the feature.
So, Solitaire, whose modern incarnation, the Microsoft Solitaire Collection, reached 100 million unique users in 2016, must have yielded quite a monetary benefit to its creator, right?
Cherry was just an intern then, so he did not get a single cent. He jokingly added, “One time I said that if I only got a penny per copy, I would be very rich. So far only 14 people made good on that, I’m still waiting for the rest of you”.
The creator is immersed in his life as a cidery owner while you are trying to get over the fact that he was not paid. Seriously, guys!
Facebook’s Business Model For Messenger Will Not Be Payments And Commerce, After All
In the last couple of years, Facebook has expanded into messaging services in a big way. Adding a lot of features and capabilities to its captive Facebook Messenger app, Facebook seems to have a mission chalked out for real-time messaging.
In fact, Facebook, realising the scope for commerce and payments within the service, introduced person-to-person payments in 2015, and very recently enabled the same in group chats.
The thinking initially was that building on commerce and payments would offer a secondary revenue stream, an addition to the existing ad revenue. Other messaging apps like Line (primarily in Japan) and WeChat (primarily in China) are already building fast-growing models around commerce and payments.
When, back in 2014, Facebook hired David Marcus, the former President of PayPal, the speculation seemed vindicated.
Before running the show at PayPal, Marcus had also founded Zong, which processed payments for social games and apps on platforms like that of Facebook’s.
So the expectation, of course, was that he would bring Facebook around, restructuring Messenger in a manner that makes commerce and payments central to the model.
The underlying belief was that the level of engagement that we see on messenger services of the kind, combined with access to your address book – and eventually your bank account or credit card – could be the gateway to simple, frictionless mobile payments and commerce.
Even though the idea may seem far-fetched, when you think of it, it really isn’t.
There are a few successful examples of this. The first and easiest is Japan’s Line, which generates money from social gaming and virtual sticker purchases. Then we have WeChat, which has launched its in-app payments service in China. Another potential space for a mobile payments interface is email, which Square is exploring with Square Cash.
An especially interesting example is Apple’s App Store for iMessage.
iMessage, a free messenger service quite like Facebook’s own Messenger and WhatsApp, now has its own app store. It came at a time when the original iOS App Store had become cluttered with out-of-date and abandoned applications – not only giving Apple users a breath of fresh air, but also giving Apple the potential to generate revenue through a space that would otherwise remain a free service.
Instead of looking for standalone apps that sit on homescreens, where they are often forgotten, people generally look for add-ons that enhance their mobile messaging experience. This is precisely the potential that Apple tapped into when it launched the iMessage App Store, and precisely the potential that Facebook’s Messenger can tap into. Generally, this has included everything from custom keyboards to apps that add flavor and humor to your messages, like apps for sharing GIFs, emoji, stickers, and more.
While all of these are fun add-ons, they are also a revenue generating space, because most of these are paid features.
With Facebook, even more add-on potential existed with bigger things. How about being able to shop through Facebook Messenger?
When Facebook added its Buy Button to Pages, the speculation was that something similar would come to Messenger as well. That would have converted Messenger into a platform to build commerce on, not necessarily through retail, because Facebook planned on being the middleman, only linking users to sources (not being a source itself).
Well, speculation abounded, but the truth of the matter is that Facebook has not taken any of these steps yet, nor does it seem keen on taking them anytime in the near future.
Facebook is working cautiously – and that makes sense – given how many times Payments has proven to be a lacklustre business model for social media companies.
In the past, Twitter tried it, to not much uptake, and finally shut down its commerce efforts entirely.
Snapchat’s Snapcash already sounds way too foreign, and Pinterest, another social media platform with huge commercial potential, is taking the same approach as Messenger – sticking closer to advertising rather than becoming a sales platform.
Commerce has become a better-integrated part of Facebook since Marcus took over, yes, but it has also become more and more clear that Facebook might be depending on what it knows best – concentrating on advertising as the source of revenue. While Facebook is working on more options and more add-ons, these don’t seem to be aimed at being nurtured into primary revenue sources, at least for now.
This stand became clear when Marcus, in a recent interview said “We’re not going to take cuts of payments. The one thing we traditionally do, and is a decent business for us, is advertising. So we’ll continue focusing on that”.
While Facebook’s advertising features are not new, Messenger is a relatively new space for ads. For now, it offers two types: News Feed ads that direct people back to the messaging app, and ads placed inside the Messenger inbox. Those are the two things that Facebook is sticking to, for now, for revenue.
Their decision to push more into advertising also means that the payments business won’t receive as much attention. Facebook generated USD 753 million from payments last year – mostly from purchases related to desktop games – representing less than 3% of its total revenue. Altogether, the payments business is in decline: down 11% from 2015, and almost 23% from 2014.
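A quick back-of-the-envelope check shows the figures quoted above hang together (numbers in USD millions; the total-revenue figure is an assumption consistent with the roughly USD 27 billion in ad revenue reported for 2016 plus payments):

```python
# Figures quoted above, in USD millions; total_2016 is an assumption
# (ad revenue of ~26.9B plus payments), not a number from this article.
payments_2016 = 753
total_2016 = 27_638

share = payments_2016 / total_2016 * 100
print(f"Payments as share of total revenue: {share:.1f}%")  # → 2.7%, under 3%

# Working backwards from the quoted year-on-year declines:
implied_2015 = payments_2016 / (1 - 0.11)  # roughly USD 846 million in 2015
implied_2014 = payments_2016 / (1 - 0.23)  # roughly USD 978 million in 2014
```

In other words, payments would have had to shrink by roughly USD 200 million over two years for all three quoted percentages to hold, which is exactly the "decline" the article describes.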
On the other hand, their advertising business, which grew 57% in 2016 to almost USD 27 billion, is not only doing well for itself, it is actually keeping the ship afloat at this time.
All of that said, we are not saying that Facebook is not working on commerce- and payments-based add-ons.
“Advertising is great. It’s a fantastic business”, Marcus said. “You [still] need to enable payments and all that kind of stuff to remove friction from the experience when someone wants to buy something. If you do that, then the value of that conversation for the business increases”.
Qualcomm Sues Apple For Hobbling Its iPhone Chips To Make Intel Look Better
Back in January, we’d covered a lawsuit that had been filed against Qualcomm by the Federal Trade Commission for unfair trade practices, as well as suits by Apple and other manufacturers over the inordinate pricing of its components.
Well, it looks like Qualcomm is finally geared up to fight back.
Qualcomm recently filed an Answers and Counterclaims suit against Apple. While the suit is a 139-page document, the company has five key complaints.
The main premise of Qualcomm’s suit is that Apple deliberately did not use the full potential of Qualcomm chipsets in its iPhone 7 and iPhone 7 Plus smartphones. Qualcomm states that Apple did this so that the Qualcomm-powered iPhones wouldn’t perform better than the ones powered by Intel’s chips.
Qualcomm says that Apple “chose not to utilize certain high-performance features of the Qualcomm chipsets for the iPhone 7“. They also allege that Apple tried to cover up how much better the Qualcomm-powered iPhones perform than the Intel-powered ones: “Apple falsely claimed that there was ‘no discernible difference’ between the two variants“.
The company also added that Apple prevented it (Qualcomm) from revealing to customers, the difference in the performance of the two processors. They say that Apple “threatened” them into keeping quiet about the matter, thus preventing Qualcomm from “making any public comparisons about the superior performance of the Qualcomm-powered iPhones“.
Amongst other noteworthy complaints in the countersuit are claims that Apple breached and mischaracterized agreements and negotiations with Qualcomm, that Apple also encouraged attacks on the company in markets outside of the U.S., by misrepresenting facts and making false statements, and that Apple interfered with Qualcomm’s existing agreements with other companies.
Qualcomm’s suit quite obviously comes as a response to Apple’s suit against them from back in January. “Apple could not have built the incredible iPhone franchise that has made it the most profitable company in the world, capturing over 90 percent of smartphone profits, without relying upon Qualcomm’s fundamental cellular technologies“, Qualcomm said. “Now, after a decade of historic growth, Apple refuses to acknowledge the well established and continuing value of those technologies“.
In the last few months, Qualcomm’s journey has been rocky; first, the FTC hit it with a lawsuit in regard to Qualcomm’s use of its patents: specifically, how it wouldn’t sell modems to companies that didn’t also agree to pay royalties on phones that didn’t use Qualcomm modems. Then came three Apple lawsuits.
The first Apple lawsuit was filed in the U.S. and claimed USD 1 billion, stating that the chipmaker had been drastically overcharging for the use of its patents. Two other Apple suits against the chipmaker were filed in China and the U.K., focusing on patents and design.
All of this comes at a time when Qualcomm is working on rebranding itself. In the last couple of years, Qualcomm has been hailed as the king of the mobile processor industry; most flagships carry Qualcomm chipsets now, and almost every manufacturer has its most prestigious devices running on Qualcomm.
However, they believe that the other hardware they supply for the devices – such as Qualcomm’s RF front-ends, Quick Charge, its digital-to-analog audio converters, Wi-Fi products, touchscreen controllers and fingerprint readers, as well as the software and drivers used to make all of this work – has been overlooked. To change precisely this, Qualcomm recently started a rebranding campaign, ensuring that no one calls its processors “processors” anymore, but “platforms”, inclusive of all the other products that Qualcomm supplies for the devices.
This legal battle with Apple, which is certainly going to be a long-drawn one, might cause Qualcomm to take a hit, at least where the goodwill element of the business is concerned. What is noteworthy, however, is that Qualcomm has tried to keep things running smoothly, still supplying Apple with chipsets as the two go at it in the courts.
As far as the lawsuits are concerned, it’s on the courts to see how substantial they are. All we can do is speculate if there is any actual reason for the lawsuits to happen, or if it is just two companies going toe-to-toe to pay less and charge more for their intellectual property.
Qualcomm’s CEO, Steve Mollenkopf, believes that “Apple’s complaint contains a lot of assertions, but in the end, this is a commercial dispute over the price of intellectual property. They want to pay less for the fair value that Qualcomm has established in the marketplace for our technology, even though Apple has generated billions in profits from using that technology”.
Qualcomm believes that their patents have “tangibly and meaningfully increased over time” but the company has never raised its royalty rates. “At the end of the day [then], they essentially want to pay less for the technology they’re using. It’s pretty simple“, Qualcomm President, Derek Aberle, added.
“We intend to vigorously defend our business model, and pursue our right to protect and receive fair value for our technological contributions to the industry“, added the chipmaker’s General Counsel, Don Rosenberg.
As far as Apple’s response to the chipmaker’s counterclaims is concerned, Apple recounted its stand from January, stating that “Qualcomm built its business on older, legacy standards but reinforces its dominance through exclusionary tactics and excessive royalties“.
To take a step back and look at this: the facts have not yet been established, and we are not yet sure if this is indeed what Apple is doing. But if it is, then, to be honest, for a company that charges quite handsomely for its own products and defends its intellectual property fiercely, the expectation that other brands not be allowed to do the same does not set a good example.
As for Qualcomm’s claim that Apple is propping up Intel’s image even though Qualcomm platforms perform better, all we can say is that the debate is not much different from Apple vs. Samsung; there are always going to be two teams, and each team is going to believe it is better.
If you’re an Indian, you’d know what Aadhaar is, and if you don’t, then welcome back from your protracted visit to Estonia.
In your absence, the Government spent caboodles of money creating India’s version of a Social Security number; well almost. While the name means “fundamental” or “foundation”, the import of the enrolment is basically to provide a unique identification number for each citizen.
It doesn’t really entitle you to much by way of government aid or medical benefits, but it does facilitate a number of important activities in current-day India. First and foremost, the Aadhaar card has your unique identification number (yes, your Passport, Voter ID and PAN are all unique, but this particular identification has more). It is the first government identification methodology that carries both your biometric and demographic information.
Biometric information includes your photograph, fingerprint, retina scan and identification marks, while the demographic information includes your residence address, date of birth and even your permanent phone number, to name a few.
The problem is, during its very infancy, the Aadhaar protocol ran into several backers, and even more naysayers. The outcome? Contradictions and general insignificance.
Well, the current government is starting to change that.
What seems evident from the current government’s change in stance is that Aadhaar might just become India’s universally accepted identification document across all Government (and government-controlled sectors).
Is that a good thing? Well, read on.
They want to leverage the gargantuan database of over 1.14 billion people who have enrolled with Aadhaar, for various purposes, the implied intents of which are causing some amount of anxiety in the citizenry.
The changing stance is in contravention to earlier (and oft-repeated) assurances and public statements by the government that Aadhaar would not be made mandatory for essential services. Yet, moves like mandating the PAN card to be linked to the Aadhaar for filing tax returns, the re-verification of all phone numbers using Aadhaar by February 2018 as the prerogative of the telecom ministry, and the verification of university degrees using the Aadhaar by the UGC, come in direct contradiction of the Supreme Court’s order of 2015, suggesting that Aadhaar cannot be forced on people.
Feathers were ruffled when a honey trap was spotted.
There were changes to the National Identification Authority of India Bill (2010) when the Aadhaar Act was introduced in 2016.
The seemingly innocuous changes actually have dubious ramifications.
Initially, if you supplied your Aadhaar card as a verification document for purchasing a service or an asset, the vendor supplying that service or asset simply received a “Verification Passed” or “Verification Failed” intimation. This binary response carried no additional information about the citizen or her demographics.
As per the revised Act, such vendors/suppliers can get access to much more information about the customer. In fact, the Unique Identification Authority of India (UIDAI) will share almost all of the customer’s information (thankfully, except your core biometric information, which means they do not share your fingerprint and iris scan records).
Here are some pertinent extracts from the Acts in question. Note the differences in their wording.
NIDAI 2010: “The Authority shall respond to an authentication query with a positive or negative response or with any other appropriate response excluding any demographic information and biometric information.”
Aadhaar Act 2016: “The Authority shall respond to an authentication query with a positive, negative or any other appropriate response sharing such identity information excluding any core biometric information.”
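In code terms, the shift between the two extracts above can be sketched roughly like this. This is a hypothetical illustration only; the function and field names are invented for the sketch and are not from any actual UIDAI API.

```python
# Hypothetical sketch: all names here are illustrative, not a real UIDAI API.

# Stub registry standing in for the authority's database.
_REGISTRY = {
    "999912345678": {
        "fingerprint": "fp-template-1",
        "name": "A. Citizen",
        "address": "New Delhi",
        "date_of_birth": "1990-01-01",
    }
}

def verify(number, fingerprint):
    """Check the supplied biometric against the stored record."""
    rec = _REGISTRY.get(number)
    return bool(rec) and rec["fingerprint"] == fingerprint

def authenticate_2010_style(number, fingerprint):
    """NIDAI Bill, 2010: a bare yes/no; no demographic or biometric
    information ever leaves the authority."""
    return {"verified": verify(number, fingerprint)}

def authenticate_2016_style(number, fingerprint):
    """Aadhaar Act, 2016: the response may carry identity information,
    excluding only core biometrics (fingerprint and iris records)."""
    ok = verify(number, fingerprint)
    response = {"verified": ok}
    if ok:
        rec = _REGISTRY[number]
        # Demographic fields may now flow back to the requesting vendor;
        # the fingerprint/iris templates themselves may not.
        response.update({k: v for k, v in rec.items() if k != "fingerprint"})
    return response
```

Under the 2010 wording a vendor learns only `{"verified": True}`; under the 2016 wording the very same query can also return the citizen’s name, address and date of birth.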
So, now, the repository of your information is neither safe nor private – the Unique Identification Authority of India is now mandated to share its data with various agencies and vendors, who can simply request the additional information, under unspecified and unverified causes.
Am I sensationalising the issue? No!
Once Aadhaar becomes an all-purpose, mandatory identification tool, your life will be very visible to the state. True, there are other ways to get the same information, but not as comprehensively as what the UIDAI will be able to provide, simply and innocently. True, most of us don’t have a reason to fear such disclosure – but it’s not a matter of fear, it’s a matter of potential misuse that can’t be traced back to the originator of malicious acts, since your data would be available to innumerable agencies, vendors and unknown opportunists.
However, do not be disheartened – there are some silver linings to this change too.
Aadhaar is now being used by financial institutions as a means for you to pay online (via your registered biometrics), even at simple kirana shops.
You can now geographically traverse the country, and your Aadhaar card will get you through most verification instances.
This is also convenient for parents of children who do not have a driving licence yet – I can imagine the relief of parents of small children, far from their abode of stored documents (like birth certificates), who now have the convenience of the Aadhaar card as a commonly recognised government-issued ID!
There’s more on the anvil.
According to a Civil Aviation Ministry report, the ministry intends, in the coming months, to make the Aadhaar or the Passport mandatory as identity proof for flying domestically.
The reasons for doing so have been cited as an attempt to cover security concerns across the spectrum – from national security to organized crime.
A primary insinuation is that this would begin the creation of a no-fly list built from the data received through these ID verifications, segregating offenders into four levels of offences ranked by seriousness, from criminal conduct down to instances of misbehavior with airline staff or fellow passengers.
Guruprasad Mohapatra (Chairman, Airports Authority Of India) reportedly said that a concept note on using Aadhaar to identify flyers has to be prepared. “It was felt a joint system be developed that can be replicated by all airports. Wipro has been asked to develop a concept note in this regard after consulting all stakeholders, including the JV airport operators. We are seeing what all airport processes can be made e-enabled“, he said.
He also added that the system would make security checks and airport screening hassle-free.
So why this article, and why are we concerned?
What seems worrying is that the Government is beginning to lay the tracks to be able to monitor your activities through data received while you are transacting, entering a location or boarding an airplane – thus making our privacy an important issue.
That is one topic the Government is clearly not interested in talking about. Nor is the security of that data.
Once Aadhaar becomes an all-purpose identification tool, your life will be completely transparent to the State, and we aren’t used to that. The antics of the United States’ NSA show the level of intrusion such governmental access can enable. And India has even fewer scruples when it comes to such behaviour.
However, don’t be disheartened; there is a brighter side to this too. A Surveillance State can not only combat terrorism, crime and corruption, but can also create databases of habitual offenders and deny them access to the services that have been assisting them till now.
As long as the Government does not use it to crush democratic norms, attack civil rights of ordinary people or (especially) target political rivals to their own advantage, the Surveillance State might not be as much of a hassle as we are expecting it to be. Fingers crossed.
Samsung Is Quietly Building An Immense Platform To Connect All Its Wares
Samsung’s been around so long, and done so much in the field of consumer technology that we don’t really need to extoll the place it has made for itself.
The recent Samsung Galaxy S8 launch event however, provided a glimpse into the mega-brand’s long-term strategy, and we thought that you’d want to know about the behemoth’s gameplan.
Over the years, Samsung has built an intriguingly wide network of consumer tech products, which includes smartphones, tablets, wearables, PCs, TVs and other such electronics that we end up using throughout our everyday lives.
Thing is, this array of products places Samsung in a unique position to build the most comprehensive web of connected hardware of all existing brands and their inventories.
Up until now though, Samsung has not leveraged this unique capability, and has largely focused on building new, independent product lines and driving them into our homes.
That seems to be changing.
An example of this would be the new Samsung Pass, which moves beyond the simple (though critical) capability of digital payments offered in Samsung Pay, to a complete multi-factor biometric-capable identity and verification solution.
It also appears to be compatible with the FIDO Alliance standard for passing identity credentials between devices and across web services – expected to be a critical capability in the future.
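To get a feel for what credential passing of this kind involves, here is a minimal, illustrative challenge-response sketch. Note the heavy caveat: the real FIDO/WebAuthn flow uses asymmetric public-key signatures so the service never holds the user’s secret; this sketch uses HMAC with a shared key purely to show the overall shape (fresh challenge, local signing after a biometric check, server-side verification). All function names here are our own, not from any FIDO SDK.

```python
import hashlib
import hmac
import os

# Illustrative challenge-response flow. Real FIDO/WebAuthn uses asymmetric
# signatures, not a shared HMAC key - this only demonstrates the shape.

def issue_challenge() -> bytes:
    # The relying party (a web service) sends a fresh random challenge,
    # so a captured response can never be replayed later.
    return os.urandom(32)

def sign_challenge(device_key: bytes, challenge: bytes) -> bytes:
    # The authenticator (e.g. the phone, after a biometric unlock)
    # produces a response bound to this specific challenge.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def verify(device_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, response)

key = os.urandom(32)                  # provisioned at enrolment
ch = issue_challenge()
resp = sign_challenge(key, ch)
print(verify(key, ch, resp))          # a genuine response verifies
print(verify(key, ch, b"\x00" * 32))  # a forged response is rejected
```

The key property on display: the password (or biometric) never leaves the device; only a one-time proof derived from the challenge does.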
Another example would be that of the Bixby assistant on the Galaxy S8.
It provides assistant capabilities much like the others on the market, but it also has potential tie-ins with other Samsung hardware.
Basically, what we can foresee is that you would be able to tell Bixby to control your other Samsung-powered devices without needing to go via a hub or a ‘Google Home’ kind of middle-gadget.
This approach goes against what has been the traditional thinking within the tech industry so far – to build your own viable network, you must base it on an operating system of your own. Companies like Microsoft, Apple and Google have successfully turned their OS offerings into platforms, then leveraged them to provide additional revenue-generating services, and controlled who could access the users of their platforms, and how.
There have been companies, like Blackberry, HP and LG (with WebOS), and even Samsung that have failed to replicate this OS-to-platform strategy.
Over recent years, however, there have also been platforms built without the base of an OS. This includes Amazon, which has built a network on the capabilities of its own assistant, Alexa, and Facebook, which has built one on the sheer base of its outreach.
Samsung’s attempts going forward might give these two some significant competition.
As I mentioned earlier, this is not the first time that Samsung is trying something of the kind. They did try to connect their gadgets and build a platform around their own OS, Tizen, but that approach was quite similar to what others were already successfully doing.
In an already crowded market of OS-based platforms, Tizen didn’t quite take off as Samsung would have wanted it to.
The renewed efforts, with this approach, could be expected to be more fruitful.
One could argue that Apple and Google are doing something similar. They too have a wide array of products, and most of their devices are connected to each other, functioning quite like a platform. What would make Samsung different is that, unlike Apple and Google, where iOS and Android are the glue that connects all the products, Samsung’s world would not rely on an OS as the base of its integrated platform.
However, building a platform of this kind cannot be done solely by one company, regardless of how big it is. Even though Samsung’s existing market share gives them a better-than-fighting-chance, they will need to be ready to work proactively with partners, and competitors, to make their connected device platform viable.
Given how Samsung has been operating of late, they might already have reconciled themselves to this reality, and seem ready to extend their network to cars. The recent purchase of Harman, a major automotive component supplier, could be the appropriate push into that space.
All said, it’s good to see Samsung make inroads into something that’s been a long-term dream of many a tech enthusiast.
We just hope they realise that TouchWiz either needs to be put out to pasture, or needs a major run-in with some savvy developers, so that it’s fleet-footed and capable enough to be more than just a source of mirth.
Meet A Wearable That's A Lifesaver - Diagnoses Cystic Fibrosis, Well In Advance!
The latest in a string of recent medicine related wearables is a wristband sensor that can diagnose cystic fibrosis – by analysing sweat.
Cystic Fibrosis is a genetic disease that can cause lung infections, resulting over time in the inability to breathe consistently.
This device has the potential to transform the currently arduous and expensive diagnostic process into a commonplace self-test. In fact, it could even help with drug evaluation for the disease!
How it works is that the device accumulates a person’s sweat, measures its molecular constituents, and then automagically transmits the results to dedicated labs for analysis and diagnostics.
The system actually relies on a two-part workflow – flexible sensors and microprocessors stick to the skin, stimulate the sweat glands and collect the necessary samples; the device then detects the presence of different molecules and ions based on their electrical signals.
If, for example, the sweat has a higher-than-average amount of chloride, more electrical voltage will be generated at the sensor’s surface.
The wearable will work in tandem with a smartphone – to send measurements to the cloud, and to receive a result straight back after review at a specialized center.
This would also mean that there is no longer a need for a team of specialists to conduct the test, nor the need of a fully equipped lab.
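The post-processing step is conceptually simple. Here is a toy sketch of what it might look like: the calibration constants and function names are entirely our own assumptions for illustration; only the diagnostic cut-offs (roughly 60 mmol/L of sweat chloride suggesting cystic fibrosis, 30–59 mmol/L being intermediate) reflect standard sweat-test guidance.

```python
# Hypothetical post-processing for a sweat-chloride sensor reading.
# Calibration values are invented; the thresholds follow the standard
# sweat-test ranges (>= 60 mmol/L suggestive of CF, 30-59 intermediate).

def voltage_to_chloride_mmol(voltage_mv: float,
                             slope: float = 0.5,
                             offset: float = 5.0) -> float:
    # Assumed linear calibration: higher voltage -> more chloride.
    return slope * voltage_mv + offset

def classify(chloride_mmol: float) -> str:
    if chloride_mmol >= 60:
        return "consistent with cystic fibrosis - refer for confirmation"
    if chloride_mmol >= 30:
        return "intermediate - retest recommended"
    return "unlikely"

reading = voltage_to_chloride_mmol(130.0)  # -> 70.0 mmol/L
print(reading, classify(reading))
```

In the real device, of course, the raw measurements go to a specialised centre for review rather than being classified on the wrist; the point is simply that the electrical signal maps to a concentration, and the concentration to a clinical band.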
Moreover, as mentioned earlier, the prospects of the wearable are not limited to diagnosing and monitoring – it could also help with drug development and drug personalisation!
This could be a very big step forward for the patients of cystic fibrosis, primarily because the conventional method to diagnose the disease is a bit of a bummer.
The current modus operandi (driven by the lack of sufficient equipment, and environment) requires that patients visit a specialized center and sit still while electrodes stimulate sweat glands in their skin to provide sweat for the test.
This can be extremely tough, even torturous, for kids, in whom the disease is diagnosed most often.
Additionally, this almost-torturous method is unavailable to over half the world’s population, especially in the under-served communities and in out-of-the-way locations.
This new tech could result in the development of a convenient, economical and reliable device that can make the monitoring and assessment far more accessible, real-time and conclusive.
While accumulating enough sweat for a reliable analysis might still be a problem for people who aren’t physically active, or don’t sweat much, the technology is still a definite step forward from the current scenario, as it no longer requires the person to sit still for a long time while sweat accumulates in the collectors.
The study that led to the development of the device comes from the Stanford University School of Medicine, in collaboration with the University of California, Berkeley.
The coming together of technology and medicine is obviously nothing new – most of the good health and medicine we take for granted today comes solely from innovation and a persistent effort to make underpinning technology better, smaller, cheaper and ever more useful.
However, what is new – or recent, at least! – is the interest of the tech giants in medical research, with the aim of devising wearable gadgets that can detect diabetes, or other diseases of the kind.
It’s interesting and heartening to see techies working so hard to make healthcare easier, leveraging nascent technology like wearables to manage complex diseases that require constant monitoring to keep symptoms in check – a process that can otherwise be cumbersome and costly, and hence avoided by most people.
A similar device developed not too long ago was an alcohol-detecting wearable sensor. It was found to accurately measure blood alcohol levels from sweat. The developers even enabled the wearable to transmit the data wirelessly to an accompanying smartphone app.
This device has not been put out there for the masses yet, but it could be instrumental in letting users know when they are over the legal drinking limit, and perhaps in lessening drunken driving on the roads, thus saving many a life.
A similar large scale use for the cystic fibrosis tech is possible, perhaps in the form of an integrated smartwatch. “In the longer term, we want to integrate it into a smartwatch format for broad population monitoring”, said one of the researchers on the cystic fibrosis device team.
We’ve already seen how successful mass-produced and conveniently priced glucometers have changed the lives of diabetic patients – allowing them to live a fuller life, with fewer visits to medical centres, while also being alert and accurate enough to raise alarms when readings are found to be awry. The electrochemical sweat sensor could do the same for those with cystic fibrosis – and save many more lives too!
There’s no end to the love and respect that man owes technology, and scientists. And this debt will only grow in the coming years!
BlackBerry, Freed Of Smartphones, Shifts Focus, And Regains Its Mojo
It was sad to witness the fall of a once-iconic brand that ruled the entire sphere of smartphones, and the failure of the Priv, BlackBerry’s first Android handset, seemed to be the last nail in its coffin.
While many blamed BlackBerry for many things, we (at Chip-Monks) saw BlackBerry struggling for just one reason – being married to its past.
Fortunately, BlackBerry is built of sterner stuff, and whatever the hyper-critical twitterati may troll BlackBerry for, there’s one thing it has – a brilliant, thinking, and persistent management team.
Divesting itself of its struggling captive operating system was the first strategic shift; but they didn’t stop there – BlackBerry moved to making Android phones, later eating humble pie and making its devices more affordable (and hence more attractive, in this world of lesser brands ruling the roost).
Well, the road appears smoother now, and its share value has actually risen. Profit margins are better, and money’s coming in, consistently.
BlackBerry has surpassed the expectations of the many people who had considered the company’s future doomed.
The hundred dollar question is: what brought about this change?
Other than the pivots I covered earlier, BlackBerry has finally started to function like a software company is supposed to, and the recent rise in its software revenue shows that BlackBerry’s gamble of liberalising its platforms is here to stay.
BlackBerry has also shown tremendous improvement in its recently released software, which includes software for self-driving vehicles (bet you didn’t know they were in that playpen too)!
Unsurprisingly, the world’s most secure smartphone brand promises to emerge as one of the biggest security software providers in the world of mobile phones.
The company has developed a diverse range of security-oriented software products, including services that let companies track their employees’ mobile devices, encryption, and tools that help users separate personal data from professional stuff.
Bloomberg Intelligence analyst Matthew Kanterman said it best when throwing light on BlackBerry’s cost-effective policy, stating, “they have taken a lot of cost out of business and are re-investing those proceeds into software”.
He further prophesied that investments in the new products would prove to be the anchor that helps the company stay on steady ground and, “prevent the latest threats and ultimately in longer term sustain even faster growth”.
There’s more. BlackBerry has moved away from “proprietary china-walling” in quite a big way, committing to outsource its device design, production and sales to companies in India, China and Indonesia. Hardware manufacturers in these countries will make the handsets, and BlackBerry will collect royalties while also providing those smartphones with its software packages.
Learning from its past of having all its eggs in one basket, BlackBerry seems to have zeroed in on the increasing potential created by the budding ties between automakers and technology companies. Toyota and Microsoft recently struck a deal, and now BlackBerry seems to be trying to edge into this burgeoning space.
The QNX software business, associated with the company’s Internet of Things strategy, also seems to be a future profit-maker, with its ability to handle systems such as connectivity and driver assistance.
TD Securities expects it to be a key player in the company’s growth.
What’s more, you’d be surprised to hear (I’m sure) that Ford Motor Co. has hired 400 of BlackBerry’s engineers who had earlier worked in its mobility unit – clearly there’s something brewing there too.
So, it wouldn’t be too presumptuous to say that John Chen is having one heck of a ride these past few months!
Stay with it, sir – you’re onto many good things. Keep fighting the good fight!
Commission Alleges Qualcomm Kneecapped Samsung's Exynos Chips' Sales
Qualcomm is super, super, super-huge in its domain and even bigger in its influence over the smartphone industry. However, the one thing it is not, is well-reputed.
The brand seems egotistic, almost neurotic, when it comes to the control it wants to exert over the industry. I think this perhaps stems from poor self-worth.
Given its tech prowess, proprietary advancements and innumerable patents in the world of processors, Qualcomm has become the supplier of choice for almost every premium brand out there. But… its proclivity to demand and enforce self-serving clauses in its agreements has been noticed by trade commissions and courts before. Now, it’s in a soup again, for the same self-serving and monopolistic restrictions placed within its agreements with Samsung.
Qualcomm has been accused by the Korean Fair Trade Commission of illegally blocking Samsung from selling its Exynos SoCs to third party phone manufacturers. However, no direct action is expected from Samsung against its ‘partner’.
Qualcomm and Samsung have had a symbiotic relationship for a couple of years now. This relationship, while beneficial to both, has not really been a friendly one for either company. Yet, given that both have leverage over each other, the ‘partnership’ will persist until something of major consequence happens.
To understand why the KFTC has made such an accusation, a brief history of the relationship between the two companies is in order.
Here is the whole timeline of events leading up to the current relationship –
Qualcomm is currently appealing the fine, and it seems unlikely that Samsung will take any direct action against it for the Exynos sales to third party OEMs.
This might change, however, if the regulators strike down the 1993 deal, leaving Samsung free to sell Exynos processors to other smartphone makers without the risk of compensating Qualcomm with a high licensing fee.
Samsung might even turn into a strong competitor, on par with MediaTek, given that it could bundle other components like memory chips and displays with its SoCs – something Qualcomm would not be able to match.
Why wouldn’t Samsung want to take direct action against Qualcomm?
As mentioned before, Qualcomm agreed to let Samsung use both Snapdragon (a Qualcomm product) and Exynos (a Samsung product) SoCs in its devices. If Samsung decided to stop using Snapdragon processors and run only Exynos, it would cost its own foundry the Snapdragon manufacturing orders it currently enjoys – an unnecessary revenue cut.
Given the fact that Samsung’s growth in mobile devices has been stagnant, this would be a business blunder.
The relationship between these two companies remains symbiotic, and no aggressive move is likely from Samsung unless the 1993 patent deal is struck down. Meanwhile, Qualcomm’s reputation has been declining significantly, given that Apple, a longtime customer, is suing it too, for lop-sided licensing agreements, along with many other smaller manufacturers.
There’s no other way to say this – Qualcomm needs to get real. The world today doesn’t suffer autocracy too well – and while Qualcomm may be whistling its way to the bank for now, given that Apple, Samsung, MediaTek and Intel are all investing hugely in devising newer (and often better) chips of their own, Qualcomm may just have to use these agreements as packaging paper in a few years. With the Internet of Things well on its way, and automobile automation being the big ticket for the next decade, this may not be the best time for Qualcomm to play the my-way-or-the-highway card.
It might just find itself on a rather desolate, lonely and barren stretch of road, with no place to go.
China’s growth in almost every field imaginable has been the talk on almost every street everywhere in the world, for a few decades now – first for their manufacturing prowess, then for their smartphones, and soon… it’ll be for their cutting-edge support services.
From mobile payments to artificial intelligence, China is leading the way in developing the underpinnings and technology – and is now sharing its expertise with its neighbouring nations, says Girish Ramachandran, TCS’s Asia-Pacific president.
In fact, China’s fast-growing digital technologies have apparently helped the Indian sub-continent digitise its economy as well, says TCS’ report. And China’s nowhere near done yet.
The report says that China will introduce some of its world-class technologies to other economies along the trading routes – “The initiative is a great opportunity for global trade, and to build connectivity“, per Ramachandran.
Chinese technologies are descending on India – starting with the export of Ant Financial Services Group’s wireless payment solutions to Paytm.
The duo signed an agreement back in 2015, to design an Alipay duplicate in India.
And it was an insightful step – given that India does not have a deep credit-card culture, with the right technologies and business structures, the transition from cash to mobile payments was never going to be difficult.
Paytm today counts an unbelievable 150 million users in India!
The payment app has helped people transact everywhere, in lieu of cash. It’s even given Indians access to small loans, via a mere scan of their phones. Now that is progress in the true sense!
“The demonetisation accords with the Indian government’s push to combat the black market, increase transparency and digitise the economy. Clearly China has played a significant role in this“, Ramachandran said.
And it’s not just financial enablement that we’re getting from China. From apps like WeChat to some of the most grassroots-enabling artificial intelligence from budding Chinese start-ups, Ramachandran finds China showing humongous intent towards India.
Tencent Cloud, the cloud services division of Tencent Holdings Limited, has set up an overseas services node in India as well as a data centre in Singapore to ensure “secure and cost efficient” IT infrastructure.
UCWeb Inc., a subsidiary of the Alibaba Group, has announced the launch of a We-Media Reward Plan 2.0 in India to stimulate and sponsor self-publishers and content distributors on the internet.
Thing is, similar to how China entered and steadily conquered the manufacturing industry – making everything from pins and needles up to complex electronics – China won’t stop at India; it clearly has designs on becoming a supra-major in the global high-tech race.
Ramachandran believes it is about time that China brands its technologies efficiently and makes them available to the rest of the world.
“Most of the apps, technologies and services are being used in China only. Turning them into world-renowned brands would pave the way for China’s next phase of growth“, he added.
Thing is, Ramachandran hasn’t understood one basic tenet of Chinese strategy – which comes from eons of wisdom – China always, always builds and tests every product in its home market, on its more forgiving (and more predictable) denizens.
Only when the products (or services) have been tested, refined, retested, improved some more, and irrefutably proven to be market-ready, does China export them outside of its shores. Not before, and no matter the loss of opportunity in the interim.
I think their maxim is – prestige before profit.
So, it is going to be rather interesting to watch whether the upcoming China-supported digital growth in India is accepted or rejected by the masses, and whether any other changes prove to be as revolutionary as Paytm turned out to be.
Time will tell.
New Chefs In The Apple Health Kitchen: Diabetes Specialists
Apple has recently hired a bunch of biomedical engineers as part of what seems to be a secret mission to fight diabetes. As initially envisioned by late Apple co-founder Steve Jobs, this would be an R&D program to develop sensors that fight diabetes by monitoring glucose levels.
While the company has for now declined to make a statement in this regard, many people supposedly familiar with the matter have come forward to share their “knowledge”.
The team is said to work at a nondescript office in Palo Alto, California, in close proximity to the company’s Silicon Valley headquarters. While we do not know the details of the project yet, we believe this is a venture to create ‘breakthrough’ wearable devices that detect the disease and monitor blood-sugar levels.
The reason this could prove instrumental in the field of medicine is that, up until now, it has been impossible to monitor sugar levels without breaking the skin. Electronic diabetes-detection devices have proven to be lifesavers for the hundreds of millions of people affected by the ailment, but all of them require pricking the skin to draw blood and discern the sugar level.
“There is a cemetery full of efforts to measure glucose in a non-invasive way“, said DexCom chief executive Terrance Gregg, whose firm is known for minimally invasive blood-sugar techniques. “To succeed would require several hundred million dollars or even a billion dollars“.
What Apple has is much more than that, so it may well be investing some of it to solve this biggie.
Reports state that about 30 people are working on the project, which has been in the works for about five years now. Reports also state that the team has been carrying out clinical trials in San Francisco, the results of which have not been revealed yet.
In addition, they have also reportedly hired consultants to look into the rules and regulations around bringing such a product to market.
For those of you who might be a little surprised: Apple – yes, the makers of the iPhone and the iPad – also has a secret workshop that has been running for a while now. In this R&D workshop, they are known to work on many non-phone products, most of which are experimental for now.
This speaks to the larger Silicon Valley trend that Google, Microsoft, Facebook and the likes have also been feeding into, through their R&D divisions. From Artificial Intelligence, to automated cars, to technology that works with medicine – they’ve got a lot going on in their backyards.
The news of the project comes at a time when the line between pharmaceuticals and technology seems to be blurring, and quite fast. While on the one hand you have scientists detecting rare genetic disorders with facial recognition technology, on the other you have Elon Musk’s Neuralink, which plans to work in the risky, uncharted territory of the brain.
The approach most companies are taking is of combining biology, software, and hardware, to tackle chronic diseases using high-tech devices. This has led to the jump-start of a novel field of medicine called bioelectronics, and it’s gratifying to see that Apple is not the only player in the game on this one.
It was last year that another biggie entered the scene, when GlaxoSmithKline Plc and Google’s parent Alphabet Inc. joined hands to unveil a company aimed at making bioelectronic devices that fight illness by attaching to individual nerves. U.S. biotech firms Setpoint Medical and EnteroMedics have already shown that strides can be made with bioelectronics in treating rheumatoid arthritis and suppressing appetite in the obese. Medtronic Plc., Proteus Digital Technology, Sanofi SA and Biogen Inc. are others playing in the field, trying to make a mark in this extremely interesting space.
Specifically, in the field of diabetes, Virta is a fairly new startup working to completely cure type 2 diabetes patients by remotely monitoring behaviour. Livongo Health is another startup, which has recently raised about USD 52 million to launch its blood-sugar monitoring product. Alphabet too is involved, via its subsidiary Verily, which has tried to tackle this big one with a smart contact lens that measures blood glucose levels through the eye – though that has not proven quite successful yet.
While we don’t know exactly what shape Apple’s project will take, it does seem to fit into the bigger vision that Steve Jobs famously dreamed for the company. Jobs believed Apple would one day sit at the intersection of technology and biology, and making this happen would be a perfect manifestation of that.
They are already halfway there with the Apple Watch, which counts calories and steps, takes heart-rate readings, and tracks other biological measures. Add this, and voila!
Siri, All Set To Tap Into iMessage And iCloud For The New 2017 iPhone Model
If you’ve been keeping up with the rumours and talks about Apple’s upcoming 2017 iPhone, you’d have read our articles about the new iPhone model’s larger OLED screen or the introduction of Augmented Reality as a prime feature on their next salvo.
But behind all the fuss around both hardware innovations, is a forgotten hero.
The software that’s going to power it all. An upgraded iOS has been released alongside every major iPhone revamp to date. No one understands the criticality of an improved and energised software platform better than Apple.
So, expect iOS 11, people. Not only is iOS the primary bond that has retained Apple’s consumers and kept them from shifting to a competing operating system, it has also been the very bedrock of Apple’s own growth and prosperity.
You may not have caught it so far, but patents have recently been awarded to Apple that focus primarily on a revitalised virtual assistant – clearly hinting at a significant revamp of iOS and how its next avatar will function.
Well, the patent, for a “Virtual Assistant In A Communication Session”, lays out the fundamentals of the new journey. Siri will most likely be integrated into iMessage and iCloud – a monumental change, much like the rumoured AR introduction.
The virtual assistant would be able to respond to queries made inside an iMessage chat. But does that mean that Apple will be listening in on your personal conversations?
iMessage is already end-to-end encrypted, and it is highly unlikely that Apple would compromise on user privacy for the sake of bringing Siri to iMessage.
To protect its consumers’ privacy, Apple has made it quite clear in the patent that members of an iMessage chat would be notified that at least one of them is using Siri within the session, and that the users themselves would be the authorizing party controlling what personal data Siri can access.
On top of that, Apple is also planning to allow Siri to make payments on behalf of the user, by choosing a suitable payment app when asked to do so during an iMessage session. Users can currently make PayPal payments using Siri, but not from within iMessage. The transaction would have to be authorized using Touch ID. This peer-to-peer payment system, riding on an already end-to-end encrypted messaging session, would be an impressive addition to the features already being rumoured for the next iPhone(s).
The extent of Siri’s reach might not be limited to iMessage; it might even gain enough access to iCloud to pull data from any other Apple device the user owns. Using the Apple ID, information from the user’s devices would be gathered, and the necessary actions and responses would be offered across the ecosystem, including the Mac, iPhone and even the iPad.
But here is the value judgement that you or any other Apple user/enthusiast should make.
Google has already beaten Apple to the punch, as its Google Allo app already provides similar services. The only difference is that Allo isn’t encrypted, simply because features like Google Assistant tap into a user’s data to provide their services, and Allo needs to communicate with Google’s servers to cater to all the requirements of the consumers.
The decision will always remain subjective, dependent on the dilemma of choosing between privacy and being the first mover.
Irrespective of that choice, the upcoming iPhone seems to be destined to become an immensely powerful ace – backed by significant changes in hardware, software and the very ecosystem supporting it.
The only thing that might hurt its trajectory is if we’ve been hoping too hard, reading too much into the rumours and conjectures, and dreaming up a device that Apple isn’t going to launch come September!
There’s nothing worse than wishes that crash against the rock of reality, is there? And yet, Apple won’t be to blame, because they never said they were going to wow us. We just fervently, hopelessly and oh-so-desperately want them to!
It is a moment that will be recounted in the history of Apple – the landmark deal with arch-rival Samsung, placing an order for 70 million OLED panels for its own prestigious iPhones.
Displays are one of those areas in which the South Korean giant absolutely excels. Although Samsung has been using curved displays in its own smartphones, this will be the first time Apple uses the technology in its leading device – and that too, from a supplier most of the world had assumed Apple was distancing itself from.
No doubt that iPhone lovers would already be drooling over the latest teaser video of the purported iPhone 8. This much-awaited device is expected to be launched in the wake of Apple’s tenth anniversary of the iPhone – an occasion that in itself, calls for something new and momentous.
As we’d reported as far back as February, this is the first time Apple is going to use Organic Light Emitting Diode (OLED) displays. These displays don’t need a backlight like the LED-backlit LCD panels all other iPhones have had – implying that this version of the iPhone is going to be ultra-thin!
LG has already shown the flexibility and versatility offered by this technology, and now Apple seems to be getting on the train.
As per Nikkei Asian Review, Samsung’s stock made a giant leap when the news of Apple’s 70 million display panels’ order surfaced.
What’s especially intriguing is that the order is said to be for bendable OLED, which has the tech world abuzz with the prospect of users actually being able to bend the new phone’s screen. We’re not too sure of that though – it perhaps refers to the curved edges of the screen, and not an actual ability to bend. Let’s park that for the moment – we’ll check around and circle back to this point in a subsequent article, once we have some better-verified sources validate it.
It is expected that Apple will launch three new iPhones this year, of which the Anniversary Special iPhone 8 will have the curved 5.2-inch OLED screen, while the other two variants will have the usual LCD displays. I don’t think this has only to do with snob value – we believe all of Apple’s suppliers put together may not be able to produce enough units of this special glass in time for the launch, given that curved OLED screens are hard to get right each time, and defect rates are much higher than those of flat screens.
Given the colossal order it will not be wrong to assume that Apple is expecting crazier than crazy demands with this launch – anniversary and the fact that the current form factor is now three years boring… umm I meant, three years old.
Although Neil Shah, Research director of Device and Ecosystem at Counterpoint Research has his share of doubts saying, ”seventy million units of the OLED phones is too high for me at this point”.
All I’d say is – history is replete with evidence that no one knows the demand for iPhones – not even Apple else they’d not run short of production capability after every launch, causing 6 week-long backorders.
And if there’s anything that consultants have underestimated (as well as other manufacturers), it’s Apple’s mystifyingly effective marketing machinery – that whips up all manners of storms and desires within even the most elusive of customers.
Plus the fact that the OLED screen is not the only USP of the upcoming iPhone 8 – far from it! The new handsets are expected to offer amazing features like a front facing camera with 3D sensors, wireless charging and much more – because, Apple desperately needs to keep up with the competition – most of whom already have most of these features.
If you are already coveting this new device and thinking of the ways to arrange money to buy it then, sorry to break your bubble, selling your kidney might just not suffice this time. The special edition iPhone with OLED display is undoubtedly going to be the most expensive iPhone ever.
Brace for it!
The Fall Of The 32-bit iOS Has Some Drastic Changes In Store For Us
With Fall around the corner, anticipation is running wild for Apple to make “revolutionary” changes to their iPhone.
But one of the things that Apple may spring on the world is the expulsion of all 32-bit apps from its App Store. This really could change the face of their hardware and software entirely!
To be honest, much like every other tech upgrade in the pipeline, this has been coming for a while. All apps and updates submitted for the App Store’s approval since mid-2015 are required to incorporate a 64-bit support system, instead of only a 32-bit one, and that’s indication enough.
We can soon expect Apple to remove support for 32-bit apps from their devices entirely, virtually killing the architecture. The 32-bit system is, at the moment, an aging cow, and in the tech world, aging cows are put down pretty quickly.
This is a unique and interesting technical achievement, of course, but it is also a kind of house-cleaning. From the early days of the smartphone and the App Store, a lot has accumulated, and a considerable chunk of it is neither maintained nor used by people anymore; of course, people move on from one app to another, and the older ones just keep lingering in the background.
This switchover would mean that the App Store would automatically flush out apps that do not have 64-bit support – a consequence of being either too old, or not having been maintained.
For those who don’t quite understand what is going on, let us plot a timeline:
Back in September 2013, Apple introduced the iPhone 5S. The device came with the then-new and advanced A7 chip, and an upgrade to a 64-bit architecture. This was the first device that came with a 64-bit version of iOS.
The iPad Air that arrived the month after followed the same path. The iOS upgrade mostly functioned well, except for certain memory-associated glitches, which were subsequently taken care of in March via the iOS 7.1 upgrade.
It was clear that Apple wasn’t going back.
The iPhone 5C was practically the last phone to house a 32-bit chip. Among the iPads, the original iPad Mini was the last one with a 32-bit system.
This year, Apple released iOS 10.3, which was basically a sounding alarm for the death of the 32-bit version, because it came with a list of all the installed 32-bit apps that would not be supported in future iOS versions.
So, let me add a few predictions to what is coming next.
Well, for starters, we can expect a first look at iOS 11 in the coming few months. This would quite likely include the dropping of support for 32-bit apps, and devices like the iPhone 5, 5C and iPad Mini will become obsolete.
The update can be expected to roll out sometime in September, and these devices will no longer have support since Apple will move over everything to the 64-bit system.
Same would go for apps that run on 32-bit system only.
This would present a unique opportunity for Apple to use its control over its software to streamline its hardware offerings. The 64-bit ARM instruction set, also known as AArch64, is rather different from its 32-bit predecessor, AArch32.
On the PC, the x86-64 instruction set is an extension of the 32-bit and 16-bit instruction sets, which gave it an upper hand over Intel’s 64-bit-only Itanium architecture. However, even today, every x86 PC supports 32- and 16-bit code. Apple could possibly be the first company to build an ARM CPU that solely supports 64-bit code.
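As an illustrative aside, you can actually see whether an app binary still ships a 32-bit slice by reading the “fat” header of its Mach-O file. Below is a minimal Python sketch – the constants come from Apple’s <mach-o/fat.h> and <mach/machine.h> headers, while the helper names are our own, purely for illustration:

```python
import struct

# Mach-O fat-binary constants (from Apple's <mach-o/fat.h> / <mach/machine.h>)
FAT_MAGIC = 0xCAFEBABE          # big-endian fat binary magic number
CPU_TYPE_ARM = 0x0000000C       # 32-bit ARM (AArch32)
CPU_TYPE_ARM64 = 0x0100000C     # 64-bit ARM (AArch64)

def fat_slices(data):
    """Return the CPU type of each slice in a fat Mach-O binary."""
    magic, nfat = struct.unpack(">II", data[:8])
    if magic != FAT_MAGIC:
        raise ValueError("not a fat Mach-O binary")
    types = []
    for i in range(nfat):
        # each fat_arch record is 5 big-endian uint32s:
        # cputype, cpusubtype, offset, size, align (20 bytes)
        off = 8 + i * 20
        cputype, = struct.unpack(">I", data[off:off + 4])
        types.append(cputype)
    return types

def is_64bit_only(data):
    """True when every slice is AArch64 - i.e. there is no 32-bit fallback."""
    return all(t == CPU_TYPE_ARM64 for t in fat_slices(data))
```

A binary containing both an ARM and an ARM64 slice still carries 32-bit baggage; one whose every slice is ARM64 is exactly the kind of lean, 64-bit-only app the purge would leave behind.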
This would also mean that a significant amount of die space could be freed on the hypothetical A11 SoC – for more CPU cores, larger CPU cores, or even a better GPU…
However, to maintain maximum compatibility and flexibility, it is very unlikely that ARM itself will ship anything that does not support 32-bit code in the near future. So, these predictions will take a while to actually materialise in shipping devices.
Windows, macOS, Linux and other operating systems still run 32-bit code within their 64-bit systems. The elimination of 32-bit support would therefore be a first for a mass-market consumer operating system: not only has iOS transitioned from 32-bit to 64-bit, but it will soon end 32-bit backward compatibility altogether – a milestone step for the company.
For now, Apple has not confirmed any of the above; not confirming suppositions has been one of Apple’s policies throughout its history. So, all the above are only predictions, based on how we can see this might play out.
The fact remains that only Apple has enough control over its hardware and software to realise benefits of this kind – and that, in itself, is significant.
Facebook has taken over such a big part of our lives today that if we don’t check our News Feed at least six times a day, we feel as if we’ve had an incomplete day. Add WhatsApp, Instagram, Snapchat and all the other social media apps into the mix, and the whole day goes by with us still in our little cocoons.
Facebook Stories is the latest attempt at integrating us deeper into this world of the visual. Casually cast on us by Facebook as a ‘new feature’, it’s actually a careless replica of Snapchat.
But why is Facebook so determined to duplicate Snapchat in all of its products? A question the entire world is asking, while cocking a snook at Facebook for its snarky be-everywhere greed.
Well, I believe there is a simple answer. Snapchat did something truly innovative with its Stories in 2013 that gave it a huge push, and Facebook felt threatened. They’d already failed at acquiring Snapchat itself, so, much like a lover scorned, they decided to copy Snapchat instead of being left out – just to keep up with the Joneses.
First, they introduced Instagram Stories in 2016. I guess we can give them points for that move – after all, it did prove to be a huge success for the company. They even managed to bring a lot of people over from Snapchat to Instagram.
But they took this competition thing a little too far when they did the same to WhatsApp in February this year. Our good old WhatsApp – a sanctuary from the noise of most other social media apps such as Instagram, Twitter and Snapchat – also became a victim of this foregrounding of the visual. I know I wasn’t alone in feeling annoyed at the removal of the old, friendly, text-based status feature. Thankfully, they heard our protestations and moans, and brought the written status feature back in the end.
The ultimate overkill, however, was when they rolled out the Stories feature in their own (i.e. Facebook) mobile app. It appears right on top of the News Feed, along with a newly updated camera. There is also ‘Direct’, which allows you to share stories privately with a friend.
There may be a few twists, but we see it for exactly what it is – a replica of Snapchat Stories.
Now, Facebook undoubtedly, has a wider reach than Snapchat or Instagram or any other social media app for that matter. It is used by people from all age groups, while the other apps cater mostly to a younger generation. So the feature will be new for a lot of its users. Who knows, it might even be successful…
Nevertheless, this excess of Facebook doesn’t sit well with most people. Twitter has grabbed this opportunity to make fun of Facebook and has come up with hilarious memes saying that even a potato and medieval manuscripts will now have stories.
We get it, Facebook – the world is moving on from the written to the visual. But you can just as easily gain an edge by doing something different. Stories are not the only way to entice your users, are they? And copying your competitors to such an extreme is definitely not the solution to adapt to this changing world.
Remember, innovation is always better than replication.
eBay is one of the founding pillars of the startup era – and it thus knows the criticality of pivoting at the right time, and the harmlessness of swallowing one’s ego for larger gain.
The multinational e-commerce megabrand that facilitates online consumer-to-consumer and business-to-consumer sales, sold its India business to Flipkart – an e-commerce player that has time and again proved to be the bigger of the two, in the Indian market.
Despite the drubbing, in a demonstration that shareholder value supersedes any emotional baggage like “hard feelings”, the sale happened as part of a larger, USD 1.4 billion deal.
The deal includes multiple funding rounds, starting with the first purse of USD 500 million that eBay is currently putting into Flipkart. The deal also includes an exclusive arrangement in which eBay merchants outside of India will be able to sell to Flipkart shoppers and even more interestingly, Flipkart sellers can sell to eBay shoppers outside of India.
Even though rumors about the deal had been making the rounds for a while, its official confirmation did come as a surprise to most of the world – primarily because eBay also owns a 5% stake in Snapdeal, which is a competitor to both Flipkart and eBay India.
Intriguingly, this move might just be part of eBay’s larger plan to line up all of its resources against global arch-rival Amazon. Clearly, they’ve been reading Sun Tzu’s Art Of War, and are embarking on a long-standing desire to challenge Amazon by strengthening the biggest of its competitors.
So – the decision to wind up their own India business, the decision to invest in a rival of Snapdeal, a company they already hold a stake in, and then to join hands with Flipkart… where exactly is all this coming from?
“The process started almost a year ago with us looking at the Indian market and seeing who was winning and who was losing and what the next few years were going to be like. We were evaluating both eBay India and Snapdeal“, said Devin Wenig, eBay’s CEO.
Well, the Indian market is indeed a very strong commerce market. There is growing wealth, tech adoption is increasing by the day, and demand for goods far outstrips supply – all of which makes the market quite a dynamic place for any company to be in.
However, what is also true is that in the last couple of years, there has been a boom of sorts in the Indian e-commerce market; the market has now become “overheated”, if you will.
Simple economics explains the tug-of-war. There are far too many companies in the market right now, and thus, to a large degree, over-investment of funding; a lot of the companies are running on their funding and not on profits, and that makes for a bad dynamic in any market.
So, something had to happen. Consolidation is the best of those “some” things; a few players joining hands to stand against the bigger one.
“Flipkart had a very strong close to last year and they are starting to pull away. So if we are serious about the market, I want to invest in — and be partners with — those that are going to win,” said Wenig.
“…there weren’t going to be 10 winners, but maybe only one or two. And Flipkart — given all of that — was the natural party to align with“.
As far as going up against Amazon is concerned, eBay has been doing that for years now, in the international market.
The two e-commerce giants started going toe to toe first in the American market, back when the e-commerce boom had just started.
Today, in the Indian market, Amazon is the bigger player, even though it has been around for only about five years; its competitor Flipkart has been around for twice as long. And eBay, which came to India before either of these was even born, today seems to have receded into a dark, forgotten alley of the Indian marketplace.
“Putting the eBay India business with Flipkart will make it a bigger business, they’ll co-populate their inventory, share their buyers and just have more scale,” said Wenig.
That is true – two giants sharing resources are always better than one. But what’s in it for Flipkart, beyond eliminating competition?
Well, Flipkart’s war chest has been drying up slowly over the years. This deal gives them more investment, for starters, and that too quite a big one.
In addition to the obvious, it also gives Flipkart a candy for its sellers – they suddenly and unexpectedly have access to global markets, and can now export inventory around the world. And Flipkart gets richer without doing a thing!
As far as the interest in Snapdeal is concerned, Wenig said: “I would say we still own five percent of Snapdeal and it’s not like we’re giving up on them“. That said, Snapdeal is on the market right now, looking to be bought out in its entirety. The word on that is not confirmed yet, but it does seem like their time in the market is limited.
So, eBay’s seemingly conflicted interests may not remain so for too long.
India Needs To Befriend Electric Vehicles, And Ola's Going To Try And Convince You
India’s electric car market just does not exist.
Sure, electric cars have entered domestic use in many countries, but Indian citizens still seem wary of these ‘hybrid’ or all-electric cars for a variety of reasons:
All of the above combine to make an electric vehicle more of an impediment than a vehicle or a mode of transport, for most people.
Thus, in order to establish and demonstrate the viability of electric vehicles, the government is left only with the option of implementing it in public transport – via e-rickshaws, cabs and buses (which would need to be supported by state and local governments along with subsidies from the central government).
But there are problems in that segment too. Most commercial vehicle owners would not make the jump voluntarily –
All said, the electric bug has not bitten India; consequently, India has not been able to impress any domestic or international investors, due to extremely poor demand for the white-elephant product. There aren’t too many options either – not too many brands make electric vehicles – and hence the chicken-and-egg causation continues.
But governments are a persistent breed. They still have a mandate to achieve – to ‘Go Clean’ (energy) by 2030.
Seeing the poor sentiment from consumers, trying another tack – like the adoption of electric vehicles in cab services – could change the perspective and demand in the market. To that end, the government is waving some carrots at cab operators.
Ola, India’s largest indigenous cab service, is poised to introduce at least one million electric cars within a five-year timeframe – the first of which are scheduled to be introduced in Nagpur, very soon.
Given that the company is fighting an intense domestic battle with American cab-hailer Uber, it is undoubtedly using the sops and subsidies provided by the government to prop up its balance sheet.
I sense caution in their countenance.
Ola is backed by the Japanese giant SoftBank Group. While SoftBank’s Masayoshi Son had suggested in December that Ola would introduce around a million battery-powered cars over the next five years, Ola’s CEO and Co-Founder, Bhavish Aggarwal, has said elsewhere that while the prospect would be followed up on, they would be cautious about the pace of the rollout, letting it depend solely on Indian consumers’ requirements.
However, this cautious approach might be set aside if Ola finds support and partnership from the Central Government, as it did with e-rickshaws. While that partnership did not quite result in long-term sales for Ola, it surely provided a platform for the electric vehicle market in India, and was noticed for it, far and wide.
Reading between the lines, I can only summarise with a supposition: Ola’s management doesn’t mind experimenting, but it definitely does not want to sacrifice profitability (and thus, viability) at the behest of a government program.
And it seems I am correct in my premise.
Aggarwal, at a media conference, was reported as saying, “Electric vehicles could transform transportation completely in India by lowering the cost of operation and ownership”.
If that is not cautious optimism, I don’t know what is.
And I cannot fault the man. It’s good to be cautious at a time that you’re draining money on the one hand, trying to win a war. You must learn from other leaders – like Tesla.
Tesla’s repeated rejection to set up a plant in India summarises the plight of the domestic market.
Despite “experts” suggesting that Tesla is going to set up a plant in India “soon”, the company itself has repeatedly denied any near-term plans for a plant in India.
This, despite visits to its California plant by India’s Prime Minister and our Transport Minister during their own trips to the U.S., and assurances of land being provided near a port for prompt exports.
Despite all this wooing and sops being offered on a platter, the world’s most renowned electric vehicle manufacturer has still not budged. Clearly, Tesla believes that the electric revolution is still some time away for India.
Tesla has, however, opened up to providing a pan-India Supercharger network, which is a positive sign, if not the sign the market was hoping for.
The Government’s plans
The Government has a very ambitious plan (see, I can be diplomatic…) of making India a “100% electric vehicle nation” by 2030. It wants to see 6 million electric and hybrid vehicles on the roads by 2020, under the National Electric Mobility Mission Plan (NEMMP) and the Faster Adoption and Manufacturing of Hybrid and Electric Vehicles (FAME) scheme. Ahem.
The automobile industry in India is currently a $74 billion industry and is likely to be the third largest automobile industry by the end of this fiscal year. So there’s plenty of money on the table.
Starting with a large-scale cab company like Ola makes immense sense, as Ola’s experience, and any fillip to its profitability, might motivate other cab service providers to start using electric cars – and finally, domestic consumers.
There’s some time to go before we see the climax to this particular story, but we really hope that India does ‘Go Clean’.
Augmented Reality Seems To Be At The Forefront Of The 2017 iPhone’s Skills
Given that Apple very recently acquired key patents related to Augmented Reality, specifically around advanced facial recognition, it isn’t surprising to come across rumours of AR being an essential part of the 2017 iPhone – widely referred to as the iPhone 8.
Apple’s stock traded high the day the news of the patents was disclosed. Adding to the conjecture pool are indications from several key Apple suppliers, and even industry analysts, who have indirectly indicated that AR might become the prime focus of the upcoming iPhone 8.
What is even more exciting is the fact that very credible sources have come out with lengthy editorials discussing different ways augmented reality could be used in the smartphone industry.
Picking some leaves from those conjectures, we decided to look at some of the different combinations of features that Apple could incorporate in the new iPhone.
While the above features, if added, would serve different purposes, they are all intertwined. The performance would have to be spot on… which Apple is very capable of ensuring (Maps fiasco aside). To enable AR, though, a larger display is a prerequisite. So we may finally be seeing an iPhone with a screen larger than 5.5 inches!
Apple has reportedly invested a thousand personnel into driving their Augmented Reality program which is based out of Israel! Clearly, they have a plan in mind.
Separate rumours have suggested that Apple is working on an AR headset which could ship in 2017 or 2018. The concept is suggested to be like that of the Google Glass but would surely see a significant spin.
Apple has repeatedly seemed more interested (and invested) in an Augmented Reality program than a VR program. The principal reason driving that focus has been the reduction of hardware and the inclusion of more software.
Similarly, the assumption that Apple would want the iPhone 8 to stay a “premium” product that could be utilised for daily usage, and “keep people in the real world, while enhancing their experience”, is another reason. These premium upgrades and introductions are set to test customers with a price tag in excess of USD 1,000. I seriously doubt that, though.
I’d eat humble pie any day of any week, but my staunch belief is that Apple knows the limits of customer spend better than anyone else, and, the iPad Pro (12.9) notwithstanding, each of their products has been priced exquisitely – garnering the maximum sales they possibly can in almost every economy they’ve entered. Going above USD 1,000 would raise more eyebrows and cause more finger-wagging than Apple prefers to stoke.
Internet Of Things: A Revolution That's Already Begun.
The typical muse of science fiction in the 70s was a hyperconnected world, in which humans were able to integrate seamlessly into a network that was both sentient and infinitely collaborative.
Philip K. Dick, in his short story “The Minority Report”, conjured a world where the individual was connected to a neural network of infinitely spread-out informative nodes – so much so that the network became predictive, to the extent that it could foretell crime.
While I believe we’re still some distance from an information umbrella becoming sentient, we’ve definitely arrived at a phase where we can seamlessly integrate not just our thoughts and actions, but our devices too.
In fact, as I write this, there is a borderless, democratic world of information unfolding – one that has devices as its members, instead of humans.
Everything from your refrigerator, to your generator, to your car is both sending and receiving information in real-time, forming a reservoir of information and crunchable data that can be used both to enrich our daily experience and to predict it.
The Internet of Things (IoT) has been leaving an indelible imprint on the things around us.
According to research by Cisco, the number of devices connected to the IoT will reach 50 billion by 2020.
Crunch the statistics and you’d understand the gravity of it: that works out to approximately 6.58 devices per person!
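That per-person figure is simple division – a quick sanity check, assuming a world population of roughly 7.6 billion by 2020 (our assumption, not Cisco’s):

```python
connected_devices = 50e9   # Cisco's forecast of connected devices by 2020
world_population = 7.6e9   # assumed world population in 2020

devices_per_person = connected_devices / world_population
print(round(devices_per_person, 2))  # → 6.58
```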
Researchers believe that the integration process is still not over. It is in the last three years of the decade, as the cost of an IoT connection drops, that almost 50% of these devices are expected to be added.
In Munich, Germany, where the Genius of Things Summit was recently held, IBM showcased its IoT ecosystem that is supposed to have infinite potential in terms of changing how we live.
Here are some of the ways in which IoT can be a major game changer.
Remember Iron Man 3? Tony Stark operates a fully functional suit without being in it. The suit in question exchanged real-time information with its base station, and even made changes on the fly!
Digital Twin is based on a similar concept. Simply put, it is an alternative to the traditional method of manufacturing, which follows the steps of conceptualising, designing, and then trying to develop the product in real life. Such processes are both research-intensive and subject to trial-and-error manufacturing.
Unlike traditional methods, digital twinning works on a cloud-based virtual image that is fed real-time information and is accessible to everyone working in the process.
This allows them a collaborative method of error detection and removal, while also increasing the efficiency of the process manifold.
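To make the idea concrete, here is a toy sketch of a digital twin: a cloud-side mirror of a machine that ingests live telemetry and can be queried by anyone on the project. All the names and the threshold logic below are illustrative assumptions, not any vendor’s actual API:

```python
class DigitalTwin:
    """A toy cloud-side mirror of a physical asset, fed real-time telemetry."""

    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}      # latest known sensor readings
        self.history = []    # full telemetry log, for collaborative analysis

    def ingest(self, telemetry):
        """Apply a real-time telemetry update from the physical asset."""
        self.history.append(dict(telemetry))
        self.state.update(telemetry)

    def detect_errors(self, limits):
        """Flag any current readings that fall outside their allowed ranges."""
        return {k: v for k, v in self.state.items()
                if k in limits and not (limits[k][0] <= v <= limits[k][1])}

# Usage: mirror a (hypothetical) turbine and spot an out-of-range temperature
twin = DigitalTwin("turbine-42")
twin.ingest({"temp_c": 81.5, "rpm": 3600})
print(twin.detect_errors({"temp_c": (0, 80)}))  # → {'temp_c': 81.5}
```

Because the twin, not the physical prototype, absorbs every update, everyone on the project can inspect the same state and catch errors collaboratively – which is the efficiency gain described above.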
The goal of Cognitive Commerce is to offer a permeable network of impeccable security, where the customer can engage in financial and personal transactions. A wide range of technology – involving both security and accessibility – is used to enable this.
In fact, one of the major enablers is the need to build a sort-of-sentient ecosystem, where the network can interact with the individual and offer a solution without the need for a string-puller.
Predictive maintenance assesses the limits of machine endurance, and uses the data being generated by the machine itself to predict the need for repair and maintenance.
For example, SNCF, the French freight and passenger transport service, has been using IBM’s services to anticipate the repair and maintenance needs of its trains and rail tracks.
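In its simplest form, predictive maintenance is trend extrapolation: fit a line to a machine’s own wear readings and estimate when it will cross a failure threshold. A minimal sketch, with made-up numbers and a deliberately naive linear model (real systems like IBM’s use far richer ones):

```python
def predict_failure_time(times, wear, threshold):
    """Least-squares linear fit of wear vs. time; returns the time at which
    the fitted trend reaches the failure threshold."""
    n = len(times)
    mean_t = sum(times) / n
    mean_w = sum(wear) / n
    slope = (sum((t - mean_t) * (w - mean_w) for t, w in zip(times, wear))
             / sum((t - mean_t) ** 2 for t in times))
    intercept = mean_w - slope * mean_t
    return (threshold - intercept) / slope

# Wear grows ~0.5 units/day, so a threshold of 10 is reached around day 20
print(predict_failure_time([0, 2, 4, 6], [0.0, 1.0, 2.0, 3.0], 10))  # → 20.0
```

Schedule the repair just before that predicted day, and the machine never has to fail in service – which is exactly the promise being sold here.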
All these features increasingly show our reliance on a supposedly uncontrollable world of information – uncontrollable not because it could become sentient (although that is one of the major concerns of skeptics), but because the information reservoir can be very vast.
But… in the name of progress, we must move.
As we’d hinted, Reliance Jio yet again tried to stand itself on the shaky legs of freebies, with its Jio Summer Surprise Offer. But TRAI has instructed it to withdraw the offer, as well as the additional benefits it gave.
There is a catch in that as well – those who have already subscribed for the offer still get to keep it.
The Jio Summer Surprise Offer was basically an extension of the Happy New Year Offer – offering three more months of freebies.
Users could get unlimited mobile data (with 1 GB of data every day) for free, 100 SMSes per day, as well as free access to Jio’s suite of apps, such as Jio Cinema and Jio Music – all for three more months.
The catch this time, however, was that the offer was only available to Jio Prime members who purchased the INR 303 (or higher) prepaid recharge, or enrolled in the INR 303 (or higher) postpaid plan.
Perhaps struggling with the low take-up rate of their Prime membership, Jio had also extended the registration date for the Prime offer by 15 days. They called it a grace period, supposedly to allow those who hadn’t been able to register for the Prime offer yet.
The move to extend the deadline was not a shocker to anyone. Given how Jio Prime seemed to be struggling to garner the same response as Jio itself did initially, it was quite expected.
In a rare exercise of power, and an even more gratifying industry-first stand, TRAI asked Jio to withdraw the offer, on the grounds that it was no longer feasible for the telecom industry to afford the kind of practices that Jio has been up to.
The Authority observed that any fall in industry revenue would negatively impact investment and loan repayment capacity, which may result in defaults on loans and spectrum purchase charges owed by operators to the government.
“Today, the Telecom Regulatory Authority of India (TRAI) has advised Jio to withdraw the three months’ complimentary benefits of Jio Summer Surprise”, the company said in a statement. “Jio accepts this decision. Jio is in the process of fully complying with the regulator’s advice, and will be withdrawing the three months’ complimentary benefits of Jio Summer Surprise as soon as operationally feasible, over the next few days”.
So, no more people can register for the Jio Summer Surprise offer, nor for the Prime membership. But those who had already subscribed before the mandated withdrawal will still be able to continue on the offer, and enjoy its promised benefits and discounts. “All customers who subscribed before discontinuation will remain eligible for the offer,” says the Jio announcement.
Moves like these, however, raise the question: how long did Jio plan to run itself on the backbone of freebies?!
Had Jio not extended the Happy New Year offer, in addition to announcing the Prime freebies, we perhaps would have been less critical of them. But this extension, which got quashed only after TRAI said so, seemed to drive home the point that Chip-Monks has been making for a while now – Jio is dangling freebies as candy for people.
It’s claiming that the candy is not free anymore, but the truth is that it’s just dirt cheap. Paying INR 303 for the minimum recharge, plus INR 99 for the Prime membership – roughly INR 400 in exchange for three months of everything-unlimited data, calling and texting – is the definition of dirt-cheap candy.
Yes, people would have signed up for it, but not necessarily because they loved or appreciated Jio’s services. It would have been more because they found it really pleasant on their pockets – and who in the world minds cheap internet, even if it is not the best?
However, we must give Jio some credit for overhauling the Indian telecom market, with the scramble it unleashed, from which Indian customers certainly benefited.
The question though, remains. For how long will Jio rely on dangling candy?
At the end of the day, there’s something Jio’s not getting – the network sucks, calls don’t connect, and calls are impeded by connectivity so choppy that one needs to hang up, fish out the other phone, and resume the conversation. If anything, it’s made a lot of us value our existing (primary) telecom operators a wee bit more!
Jio: how about investing some of that candy back into sweetening the services and the service – providing a wholesome meal that nourishes, rather than just offering an after-dinner mint? Or worse, being relegated to just a topic of conversation around the dinner table?
Reason For Despair: Trump Repeals Internet Privacy Rules.
A government needs both shepherds and butchers. Voltaire was clairvoyant enough to say it. And we were foolish enough to ignore it! In our search for better government, we chose someone that we didn’t fully understand.
And that choice will now haunt us in post-script, and for posterity.
Last year, the Obama Administration had set certain rules of the game regarding internet services: Internet Service Providers (ISPs) couldn’t sell the browsing history of their consumers without asking for their consent. There could be myriad reasons for selling a user’s browsing history – be it as raw data for advertising, or for purposes as sinister as surveillance.
The Obama Administration had drawn a strong line around that right, and thus, users’ rights and privacy were relatively safe.
But on Monday, all sense of security came crashing down as United States’ new President, Donald Trump, signed the legislation, rendering all those rules as null and void.
The reasons given for such a move sound pro-capitalist at best. One of the major reasons offered is that the Obama-era rules had handed power to a “select few” sections of the market. The Trump Government has put forth its view as one of levelling the competition.
Trump signed the repeal after the Senate and House voted to nullify the rules of the prior administration.
Democrats had repeatedly urged the President to exercise his conscience and veto the repeal. But to no avail. Sean Spicer, the White House Press Secretary, issued a statement saying, “The White House supports Congress using its authority under the Congressional Review Act to roll back last year’s FCC rules on broadband regulation.”
None of this is good for anyone who values his privacy. In the absence of such rules, ISPs are free to do whatever they want with their users’ browsing histories.
Contrary to popular belief, there are some companies that claim they want to respect user privacy. Sonic, a California ISP with a user base of 100,000, has promised its consumers that it won’t ever fiddle with their browsing history. They have been joined by other ISPs, like Monkeybrains, a San Francisco-based provider with around 9,000 subscribers.
More promises like these will come. In fact, we predict there will now be a pool of ISPs that thrive on the promise of privacy, irrespective of the user experience.
“The nature of promises is that they remain immune to changing circumstances.” But that was said by the sinister Frank Underwood, so we can’t take it at face value.
Also, we would like to make it clear that the repeal does not mean your browsing history is instantly up for sale – but nothing now says the companies cannot sell it, either. Moreover, the new administration considers the user experience as just one of the ways to make profits.
So, we guess it won’t be long before Net Neutrality is also at stake, at the altar of capitalism.
The worst part is that this will also help companies shield themselves from any responsibilities. Chris Lewis, VP of consumer advocacy group Public Knowledge, lamented that the repeal “eliminates the requirement that broadband providers notify their customers of any hacking or security breaches”.
So yeah, huddle up.
Alphabet Inc., Google’s parent company, and the owner of YouTube, recently announced that it is introducing a new system that will let outside firms verify advertisements’ quality standards on YouTube.
Coming as what Alphabet hopes will be the remedy to the huge advertiser boycott YouTube has been up against lately, the change also embraces a wider definition of “offensive content”.
Over the last few weeks, a whole bunch of companies including AT&T, Verizon, Enterprise and even the British government have pulled their ads off of the YouTube platform following the British government’s vociferous objection to one of their ads being played on top of an extremist video that featured highly provocative and offensive content.
The British Government’s reaction and consternation placed the spotlight on YouTube’s policies at the time, under which ads were overlaid atop videos on the basis of the content’s viewership, without considering the content of the video itself; nor did YouTube’s algorithm look for any parity between the video’s content and the ad.
YouTube did have some checks in place to ensure sensitive content was flagged, but the definition of “sensitive” or “offensive” that YouTube used so far was very loose and half-hearted.
The cause and the effect combined to make the situation extremely problematic for the brands, YouTube, and Alphabet itself.
Why? Well, YouTube’s erstwhile policies and methods basically meant that videos supporting terrorism, extremism and such morally offensive subjects had ads running atop them from brands of every nature, who had absolutely no support or allegiance to said videos. In fact, none of them would’ve really known of this disparity either, given the randomness driven by the automated algorithm that places ads on the platform.
Miffed and offended, many large brands pulled their ad campaigns off of YouTube, and involuntarily triggered a boycott of the platform by other brands too.
This included brands like PepsiCo., Johnson and Johnson, and WalMart, amongst many others.
Let me diverge for a bit, and state the un-obvious.
Just a few months ago, something of the kind would perhaps not have gained this form of momentum or impetus.
The fact is that there is a rising anxiety in people’s minds regarding the trustworthiness of the “new online” – everyone has become a little extra sensitive to whatever they see online. The entire Fake News saga, and how people may have been manipulated by what they read and saw online, is still very raw in their minds.
Add to this the fact that Facebook recently admitted flaws in how it reported ad performance to ad buyers.
With all of that in play, digital advertising has come under greater scrutiny lately, and thus Alphabet’s YouTube problem kept snowballing as things rolled downhill.
To be fair to Alphabet, navigating this issue is certainly no cakewalk. Some 400 hours of content are uploaded to YouTube every minute, and wading through that much content is obviously not an easy job.
To top it all, as per Alphabet’s erstwhile policies, any channel with a certain number of views was seeded with ads running on top of its videos. Alphabet had not, so far, implemented any method to categorise channels by the nature of the content they doled out, nor had it formulated differential policies towards the ad overlays.
Amidst all this though, is one undeniable fact – advertisers and brands depend upon Google’s system to get them the best results. So, being the customer, brands’ interests and brand image are paramount to YouTube’s existence, even more than its revenues.
Quite a good example would be that of Google’s AdWords, the larger ad business that Alphabet runs across the internet. Over the years, it has been Alphabet’s policy not to stand between publishers and advertisers, for fear of becoming too much of an arbiter of what’s appropriate.
But Alphabet does make a lot of money in the process of making the path from advertiser to targeted eyeballs as efficient as possible. So it must not shrug off the onus of responsibility when its systems run into issues.
Even after apologies, and statements promising that steps would be taken in this regard, brands are still pulling the plug on their YouTube ad spend, and Alphabet’s shares are doing the frisky dingo on the charts.
All in all, Alphabet has lost about USD 25 billion to this tailspin that YouTube has hit, and even though that number does pale in comparison to the entirety of Alphabet’s income, it is still quite a big number.
The new policies should bring some relief. After the forest caught fire, Alphabet improved its ability to flag offending videos and immediately disable ads on them, which has led to some advertisers circling back.
With these new policy changes, we can expect more advertisers to return. But the question still remains: is this going to be the solution?
We think not.
Even though the policies have been changed to broaden the definition of sensitive content, there’s not enough information shared by Alphabet to convince the world that there are now enough checks in place to mark sensitive content as such and treat it differently.
Case in point would be that of YouTube channels like Real Women Real Stories.
Run by Israel-based entrepreneur Matan Uziel, the channel features videos of women narrating to the camera their experiences of sexual abuse. Under even the amended policies, this would be marked off as sensitive content (under the unchanged policies it was marked off as sensitive content, and ads were taken off of the channel).
And there are many other channels of this kind that have, and will be, marked off.
So, even though it may be quite clear what content a channel is running, machines and algorithms don’t really yet know how to interpret it correctly. The content in Real Women Real Stories is not particularly offensive, not promoting terrorism, extremism, or violence of any kind – but it did get the stick.
What I’m saying is that channels like these take a much-needed journalistic approach to real stories that need to be told. Yet they will receive the same treatment as purposefully offensive content made with mal-intent does. That does not seem fair, and neither do the policies enabling such interpretations.
So, while Alphabet has admittedly taken the first baby steps – only after having been kicked in the gut by the advertisers – there is still a long way to go before it can actually work this out properly.
With YouTube’s ad revenues hopefully preserved now, Alphabet must surely realize that the task is not yet done.
Thanks to the return of Nokia 3310, retro seems to be the theme these days.
The appeal of retro technology arises due to the nostalgia for old-world simplicity and grace – quite well represented by feature phones of yester-generation.
We all remember how durable and easy to use those phones were back in the day, and how they would last days without needing a charge – unlike the smarty-pants today that need to be plugged into the wall more than once a day!
Even though we miss a lot of things about the good old feature phones, the things we certainly don’t miss are the absence of internet connectivity and the sluggish data speeds that came with the rudimentary hardware.
So, what’s the perfect combination then? An LTE-enabled feature phone?
Well, a brilliant article published a fortnight ago by a colleague at Chip-Monks about chip-major, Qualcomm creating a “simpler-phone”-targeted processor and platform, got me thinking – why would 4G be needed on a feature phone or a “simpler phone”? I’ll try and answer that here.
Well, first – 4G-enabled feature phones already exist, like the Kyocera DuraXE and the Sonim XPS. But quite strikingly, no one has really heard much of them, or about them. That’s because these are niche, oftentimes rugged devices that are rare.
Qualcomm, however, might be working to change that.
The dragon-slaying chip maker recently announced its new 205 chipset, which will allow next-gen feature phones (yeah, the ones without a touchscreen, GPS or an app store) to achieve LTE speeds.
Is that the smartest move? you might ask.
While smartphones are on the rise around the world, there are still markets that have a significant desirability for the good old “simpler” phones.
One such market is ours, that of India, and there are others, like Vietnam and South Africa.
Even though these markets have quite a significant presence of, and use of, the good old dumb phones, the demand for 4G on the other hand is climbing as well. Everyone wants a faster network.
Operators in each of these countries are indeed working on building up the infrastructure to support the demand for 4G. Studies predict that by 2020, 80% of all phones will be 4G-compatible. So why should “simpler” phones be left behind? Just because they aren’t fancy?!
With this new chipset, these feature phones would be able to access 4G, and thus get faster internet and better call quality (if they run on VoLTE). So even though they might not be the most capable phones on the market, and might not know how to charm the pants off you, faster internet will still bring a lot to them.
Better speeds will allow the good old “simpler” phones to be used for browsing social media, uploading photos, loading news sites and streaming video and live sports. They will also enable easy access to financial transactions for people who run small businesses, and allow the use of Voice over LTE (VoLTE), Voice over Wi-Fi (VoWi-Fi), etc.
The phones that run on this chipset will also run for longer on one charge, which might be quite useful in rural areas or places with limited electricity.
Phones carrying this chipset can be expected on the market in the next few months. So far, the company has partnered with manufacturers such as Megafone, Nubia and HiPad to make 4G feature phones.
Most of their concentration with this strategy has been on non-Western markets, and India fits that bill quite well. Personally, I think the Indian populace and entrepreneurial spirit will do well with this enablement.
Huawei Could Well Leapfrog Samsung And Stake A Top-Of-Android Coup
Wearables may’ve dearly wanted this crown, as would have tablets, but if humankind was to choose one product that they just couldn’t live without, it just has to be smartphones.
Smartphones have been the torch of progress for humankind for almost all of the last decade.
And where there’s a flame, there’s attention.
An analyst at Drexel Hamilton, an American research firm, has stated that Huawei will almost certainly beat Samsung to become Apple’s main competitor in the coming years:
“I do expect the Chinese to knock off Samsung and that’s probably going to be Huawei,” he said. “I see it as a Huawei-Apple fight in the future, Samsung and probably some smaller competitors underneath them“.
It’s not surprising in the least too. Huawei’s been nipping at Samsung’s heels for a while now – entering every market that Samsung’s at, spending as much money on promotions as Samsung does, and in fact, releasing as many models as it can, to counter every tech innovation in the industry.
All these efforts, however, have strained Huawei.
China aside, Huawei has been working to make a niche in the burgeoning Indian smartphone market too. Their goal is to reinforce their business base and increase efficiency in their consumer services and operations. They are focusing on Indian consumers and trying their best to meet all of our diverse needs, and divergent price points.
Trying to honor the ‘Make in India’ policy of the Indian government, Huawei has set up a manufacturing branch in Chennai.
But these expansion efforts aren’t entirely risk-free, and they do cost the company significantly. Its network equipment business is already at a low, and may be a casualty for a while.
Huawei, like other phone companies, is now preparing to accommodate the faster speeds of 5G in its devices, so it has begun to pull back its network rollouts.
So the question that comes to mind now is whether all their investment will ultimately bear fruit. It does seem at the moment like it could either make or break the company.
Huawei aspires to be the world’s biggest smartphone company in the coming six years. With their unique rotating-CEO mechanism, we can only imagine how much innovation is on its way to the market.
It would definitely be interesting to see how they impress their customers in a world where the smartphone market has become almost completely homogenous, and could well flat-line in the next 5-6 years.
That said, if there’s one company that could challenge Samsung for the second-from-top spot – it’s got to be Huawei. Xiaomi’s out of the running till such time as they enter developed markets like the U.S., which they can’t do till they sort out their patent vulnerabilities.
Huawei’s sure to be seeing that as happenstance, and make the most of it over the next 24 months or so.
Unless you’re an avid watch collector, you’d agree that wristwatches (in their current avatar, at least) are living on borrowed time.
The wristwatch is one of the select few objects that still lives in public memory and reality – despite courting obsolescence.
Yet, for their nemeses – smartwatches – one has to forgo the very question of pragmatism!
A smartwatch is still a template of privilege, one that thrives not on any distinct capabilities, but on how many devices it can emulate (read: copy), not replace.
It is hard to tell if this is a troubling prospect or an encouraging one – the timeline of progress dictates the old to make way for the new. But if the new arrival is just an amalgam of the old, then it is novelty, not progress.
Whatever popularity (can’t call it success, yet) smartwatches have experienced, is not because they’re ingenious as a product, but because they’ve been able to act as a probable all-in-one solution for modernism.
For now, a smartwatch can at best serve as a makeshift backup option. OK for something, but not good for everything.
And thus, the quest to find a smartwatch’s USP must go on.
Someone at Chip-Monks termed smartwatches as “razzmatazz – whose time has not come”. I tend to agree – the smartwatch may have some reason for existence in the future, but at the moment, it doesn’t really make a spot for itself in the world crowded with devices and wearable fitness trackers.
Consider this – they are threatening to act in the same manner as smartphones.
A smartphone replaces the need for a stopwatch, timer, wall clock, thermometer, weather map, GPS and what not. In turn, a smartwatch aims to replace the smartphone itself. But that seems highly improbable till a flexible-display solution is found.
Different companies are working to that end – LG, Samsung, Microsoft and even a little-known entity called the Moxi Group from China. Each have their own novel approach, use different materials and perhaps have different outcomes in mind.
Samsung is one of the companies that has been working on flexible displays the longest. They’ve now come up with a new approach to the smartwatch, and the prospect seems workable, in theory. We heard about this approach through a patent application that Samsung filed, titled “Display Device And Smart Watch”.
What is interesting about Samsung’s proposed watch is that it is made up of not one but two displays. The primary display, a round screen, dispenses the generic functions, while the second display is built around the rim of the watch. This means there’s no real bezel on the watch (scratch alert on!).
As per speculation, this secondary display will use its ribbon shape to carry specific information that doesn’t need to be portrayed on the main display.
The big benefit being that the user would not need to turn on the display to view critical information like the time – she could simply glance at the rim of the watch! Other information displayed here could be the weather, the date or notifications.
I am reminded of the edge display that Samsung’s current flagship smartphones carry – this ribbon display could well carry similar intimation-related info, and not so much interactive information.
But, as we mentioned at the top of this article, this seems like another attempt to mimic and replace the smartphone. The intent is baffling, for most users consider the smartwatch a device of respite, a step away from their phones. Adding more and more smartphone-specific features belies the minimalist benefits of a smartwatch – and may make the choice confusing. At worst, it may not serve that basic purpose of respite. Much like the YotaPhone with the e-ink screen on the back – while it did something new, it didn’t do anything we really needed, wanted or were missing. Hence it never really went anywhere as a product, and disappeared sooner than it appeared.
The secondary display is a novel concept, we cannot deny it. But in the end, it is just a concept. The patent has been filed, but it is highly possible that this concept might not be a part of the production process anytime soon. But it will be highly interesting for us to watch the trends, eh?
This one is going to be a mind-bending read. So I’m going to write this one slowly :-p
Storage space was at a premium for a long time, till silicon became cheaper and external hard disks and generous onboard storage became commonplace. However, this markdown still applies only to small-scale storage – try buying solid-state storage of 512 GB or more, and you’ll have to sell your pinky finger to afford it.
But then that’s because a ton of technology, state-of-the-art equipment and unaccountable intricacy are involved in building the silicon brain. The biggest contributor of cost is miniaturisation.
But that challenge doesn’t seem to scare a few good folks who scoff, “is that what you’re calling miniaturisation these days?” These fancy folks also don’t consider silicon the best dough to play with when it comes to storage. They’re the ones your mom told you to be – microbiologists.
These fine people consider a material closer to human existence – DNA – as an ideal medium to store stuff. Our brains, chromosomes and synapses hold more information than the world’s computers, after all. So there must be some truth to their hypotheses!
Confused? Let me try that again. With more specifics.
Researchers from Columbia University and the New York Genome Center have developed a process to store 214 petabytes of data in one gram of DNA. If you’re grappling with that number, or reaching for a calculator, let me make it easier. Sit back and enjoy the show!
214 petabytes is 214,000 terabytes (or roughly 1.7 million terabits).
Your phone probably has 32 GB of onboard storage and a 128 GB microSD card for external storage. So 214 petabytes would equate to 1.4 million times what you’re carrying in your pocket. Feeling a little infinitesimally small?
How about this, every word that the world has written, spoken and thought since Time began could possibly fit on a few grams of DNA!
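The comparison above can be sanity-checked in a few lines (the 160 GB phone is the hypothetical from the text; decimal units are assumed):

```python
# Back-of-the-envelope check of the storage-density claims above,
# using decimal units (1 TB = 1,000 GB, 1 PB = 1,000 TB).
GB = 1
TB = 1_000 * GB
PB = 1_000 * TB

dna_per_gram = 214 * PB        # claimed capacity of one gram of DNA
phone = 32 * GB + 128 * GB     # onboard + microSD, as in the example

print(dna_per_gram // TB)      # 214000 terabytes
print(dna_per_gram // phone)   # 1337500 -> roughly 1.4 million phones' worth
```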
Now that you get the point, let me get back to the story.
A quick crash course on the biology of DNA: you’d know that protein is needed to keep us fit and functional. Our DNA carries encoding unique to each of us, which also helps build the proteins within us. DNA’s essential make-up comprises the four nucleobases adenine, guanine, cytosine and thymine (A, G, C and T).
Don’t sweat it, you needn’t remember all this – I’ll tie it together quickly.
Back in 2012, tech-aficionados were thrown into a tizzy when Harvard University geneticists, George Church, Sri Kosuri and colleagues encoded a 52,000-word book in thousands of snippets of DNA.
The data to be stored was first converted into binary form, and the bases of DNA were then used to encode it.
Revolutionary and mind-boggling as the development was, the scientists were not even remotely near the potential storage capacity of DNA – they were able to feed only 1.28 petabytes of information per gram of DNA.
Last month, Yaniv Erlich and Dina Zielinski with their brainchild DNA Fountain raised the stakes 100 times over. With all due respect to Church and Kosuri’s work, Erlich and Zielinski postulated that they’d identified a process to compress and store a lot more information in the same humble gram of DNA.
The duo had created a new process by which they could encode 1.6 bits of data per nucleotide – 60% better than any group of researchers had previously achieved, and approaching the theoretical maximum of 2 bits per nucleotide (with four bases, log₂ 4 = 2).
Using their process, Erlich and Zielinski have successfully encoded a full computer operating system, an 1895 French film called “Arrival of a train at La Ciotat,” a $50 Amazon gift card, a computer virus, a Pioneer plaque and a 1948 study by information theorist Claude Shannon into that one gram of DNA.
How? Well, this is going to get a bit technical (now that is our understatement of 2017!)
The two geniuses first converted the files into digital code and compressed them into one master file. Using an algorithm, the file was split into short strings of 0s and 1s. The binary strings were then translated into sequences of nucleobases. They call these random packages of strings “droplets”.
Each droplet was ascribed a bar code that the scientists used at the time of reassembling the file.
72,000 DNA strands containing the digitally encoded information were then sent to Twist Bioscience, which generates synthetic DNA from a given sequence. This culminated in a vial containing DNA molecules encoded with their digital code.
The pair of scientists then used their proprietary technology to reverse the encoding process and to read the data.
They found the decoded data to contain an error free version of the original file. Perfectly contained in the little molecular hero!
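To make the translation step concrete, here’s a toy sketch of mapping binary data to DNA bases at 2 bits per base. (This is purely illustrative – the real DNA Fountain pipeline adds fountain coding, screening for problematic sequences like homopolymer runs, and per-droplet barcodes, none of which this sketch attempts.)

```python
# Toy illustration of translating binary data into DNA bases, 2 bits per base.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Translate bytes into a DNA base string (each byte becomes 4 bases)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Reverse the mapping: bases back to bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"DNA")
print(strand)                     # prints CACACATGCAAC
assert decode(strand) == b"DNA"   # the round trip is lossless
```

The 1.6 bits per nucleotide the researchers report is below this naive 2-bit ceiling precisely because real strands must avoid hard-to-synthesise sequences and carry the indexing and redundancy needed for error-free reassembly.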
Bringing it all together
Mind-bending as the achievement is, the technology is currently not suitable (or affordable) for large-scale use in the regular world. Although the data can be stored for thousands of years, the reading, writing and storage of data is pretty slow (and costly). However, that does not reduce the value of the promise this technology holds.
Data so transcribed can last for thousands of years (DNA is remarkably stable when stored well), in an ultra-compact form that can be read by any future society capable of reading and writing DNA.
So, future generations may come dig up the backyard to find fossilised pieces of memories stored in a little box not much larger than a toaster!
Have you ever thought of your favourite virtual haunt, Facebook, as a place where you could find your next job? No, right?
Well, Facebook already is host to a plethora of businesses – 65 million to be precise – that use its Pages product to showcase their wares and communicate easily and quickly with their customers.
Facebook maintains that it is more economical and sociable than maintaining a website – and that may well be true. Especially considering that maintaining a thriving presence – sharing photos, content, customer testimonials and even new offers – is just a matter of a few clicks. Also, unlike a website that needs people to visit it, Facebook takes your business to them, in a place where they spend a lot of their day consuming content.
However, Facebook has just begun to plug a gap that many business owners and employees have both been secretly hoping for. Facebook is rolling out the ability to list and apply for jobs – in a manner akin to services like LinkedIn, Indeed.com, Monster.com and Glassdoor – all through your personal (or your business’) Facebook page!
LinkedIn has dominated the employment scene ever since it launched 14 years ago – but has suffered slight setbacks on two counts.
LinkedIn has been unable to cater to two kinds of people – one, lower-skilled workers, and two, people who are not actively on a job hunt.
“Two-thirds of job seekers are already employed,” says Facebook’s Vice President of Ads and Business Platform, Andrew “Boz” Bosworth. “They’re not spending their days and nights out there canvassing for jobs. They’re open to a job if a job comes.”
It seems Facebook has exploited these two vulnerabilities that LinkedIn has forever been saddled with. And with this, Facebook is poised to change the entire game – writing new rules for it by playing to its inherent strengths.
Excited? Well, Facebook is doing this strategically. It recently rolled out the new Jobs feature for users in the United States and Canada, enabling companies to post job openings either on their own page and/or on a new jobs page for free.
“Today we’re taking the work out of hiring by enabling job applications [directly] on Facebook. It’s early days but we’re excited to see how people use this simple tool to get the job they want and for businesses to get the help they need,” said Andrew Bosworth, the company’s vice president of business and platform.
Interested candidates would then be just a click away from potential employers as they can simply click the “Apply Now” button right on Facebook.
That done, their application will be sent through Facebook Messenger, with Facebook having pre-filled the form with information like the applicant’s name and educational background, drawn from their public profile.
That’s not all! Conversations between the parties could even happen directly through Messenger, if so preferred. Though chatting with your potential employer through an instant messenger sounds a little unprofessional, it is in keeping with our times, where texting is the most favoured mode of communication.
Job seekers can filter their job search as per factors like city or area, full-time or part-time preferences, and type of work.
Interestingly, for now Facebook is charging no fee for its service of advertising job positions or filling up forms for potential employees.
If you are a keen observer, you might have noticed that Facebook had been beta testing this jobs feature for quite some time; now that it is rolling it out to the US and Canada, it is pretty clear the feature proved successful in its testing stage.
This feature will definitely bring revenue to Facebook, as businesses can pay to transform these posts into ads and gather maximum eyeballs when the ads appear in lots of people’s News Feeds. Also, if users re-share job vacancies or simply tag their friends in posts, the postings will end up garnering attention from a lot more people.
Facebook seems to be interested in roping in the business users and has been working in this direction as it has been pushing Facebook Workplace to its business users.
Facebook’s pitch – reaching millions of its active users who are looking for not just full-time jobs but freelancing or part-time gigs – looks eminently possible, as a lot of users come to Facebook every day for various reasons, from infotainment to bragging to the world about their latest foreign trip.
There’s a catch with this entire matter – but I’m going to discuss that in a future article.
And no, I am not talking about the underlying assumption that users would like to use the same platform for serious stuff like looking for a job.
There’s a significant reason why some people would continue to head to websites like LinkedIn to land their dream job. We’ll cover this soon, promise.
LinkedIn dominates the employment scene, but its 467 million user count definitely falls short of Facebook’s 1.86 billion active users. Add to that Facebook’s appeal to middle- and lower-skilled workers, and it is definitely a mountain too tall for LinkedIn in its current avatar.
Whether this is the perfect recipe for success or not, only time will tell. That said, get your CV polished up… and your Facebook profile 😉
Important: Trump Is Going To Kill Your Privacy.
The image isn’t new – a big burly guy breathing on a huge monitor in a cold room, wearing an overcoat and gloves. On the screen is someone’s smiling face, and the photo of the coffee-plus-cake they so casually had yesterday. He scrolls through their wall till he knows what they eat, what they drink, whom they supported in the election and what their ideologies are. The guy on the screen is still sleeping in his comfortable bed, blissfully unaware.
This is not a scene from an Orwell novel, instead, this is what is going to happen to you in the next few months as everything that you have searched on the internet will be up for sale, with all its preciousness and banality.
Three days ago, the U.S. House of Representatives voted to repeal the rules, rendering it permissible for companies to collect their users’ internet data and… sell it as a product. The Senate had already voted to turn down the Obama administration-era rules, which were decided as recently as October 2016.
The rules decided upon then never actually came into effect in the interim. So, in reality, ISPs are still sending your browsing history to other companies.
What was the Privacy Bill of October 2016?
The bill forced Internet Service Providers (ISPs) to respect the privacy of their users: in order to sell a user’s data to a company, the ISP needed to ask for the user’s consent.
For example, if Amazon wanted to pitch a particular advertisement about a tent to its customers, then the most profitable way to go about it would be to pitch it only to people who had already googled for the catalogs of tents.
That would be a case of a company wanting your search pattern data.
Another way would be to pitch the advertisement exclusively to those who live in the city and constantly look up mountain regions on their devices. That would be a case of a company wanting users’ geolocation data.
ISPs thus theoretically sit on an almost infinitely valuable data chest, the use of which could increase their companies’ profits tenfold.
Worse, it’s not just companies who want such data – politicians need user search patterns to gauge voters’ awareness of certain bills and rights, app developers need your search patterns to shape their products, and so on.
So now, once the repeal is signed into law (and given the corporate-inclined sentiment of the ruling party, that seems all but imminent), ISPs will no longer have to ask for consent before doling out data to companies.
What’s worse is that, time and again, hackers have proved that data stored by Google and other companies can be broken into and stolen. Which means almost anyone can now put their lens on you, provided they are willing to go to lengths for it.
One of the more troublesome aspects of this repeal is that privacy watchdogs such as the FCC now cannot even formulate adequate rules against the invasion of user data. According to Kade Crockford, director of the Technology for Liberty Program at the ACLU of Massachusetts, “This is a disaster on basically every front”.
What can you do now?
This event could well propel the invention of all-in-one privacy-barrier software, and some tools already exist. For example, the Tor Browser obfuscates your location across the Internet, making it harder for trackers to tie data back to you. Other safeguards include Virtual Private Networks (VPNs) and preferring the HTTPS versions of sites over their HTTP counterparts.
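As a tiny illustration of the “prefer HTTPS over HTTP” advice, here is a minimal sketch (the helper name is ours, not from any real privacy tool) that rewrites a plain-HTTP URL to its HTTPS form before it is ever requested – over HTTPS, an ISP can see which domain you connected to, but not the full page path or your search query:

```python
from urllib.parse import urlsplit, urlunsplit

def prefer_https(url):
    """Rewrite an http:// URL to https:// so the connection is encrypted.

    An ISP observing HTTPS traffic sees only the domain you connected to,
    not the path or query string (e.g. your search terms).
    """
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(prefer_https("http://example.com/search?q=tents"))
# https://example.com/search?q=tents
```

Browser extensions of the day, such as the EFF’s HTTPS Everywhere, do essentially this automatically, using per-site rule lists.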
In short, be prepared, for only ye know not what ye hide.
The recent ban that banished laptops and other big-screened devices from aircraft cabins flying to the United States from Europe has caused a considerable amount of distress to travellers on the aforementioned routes.
Travellers are now required to check in their laptops, tablets and large phones. On top of that, the risk of losing one’s laptop – which often houses confidential and sensitive information – is high enough that some firms may ban their employees from travelling with laptops altogether.
This has affected nine airlines and selected airports in Egypt, Jordan, Kuwait, Morocco, Qatar, Saudi Arabia, Turkey and the United Arab Emirates – and countless passengers who are suddenly hapless and finding themselves rendered inefficient.
However, certain potential solutions to tackle this problem have also emerged, and truth be told, some of them will change the way business happens.
The solution? Use technology that was built for exactly this reason, but we didn’t really take to it too seriously. Telepresence, take a bow!
Due to the restrictions imposed after the unfortunate 9/11 attacks, furthered by subsequent natural calamities such as tsunamis and volcanic eruptions, and even outbreaks of diseases like SARS, several business entities took to reducing business travel to the bare minimum.
Video conferencing (a.k.a. Telepresence) helped business enterprises cut down their travel expenditure and ensure business continuity at times of such calamity, while also reducing their employees’ exposure to risk.
To counter the current inconvenience (and significantly reduce your company’s carbon footprint too), you can use technologies like HP Halo Telepresence videoconferencing or Microsoft’s HoloLens, which – thanks to Skype – can help you conduct face-to-face meetings with your clients, colleagues or boss sitting anywhere in the connected world.
Travel restrictions have also led to the rise of several other methods that help employees carry out their tasks without much trouble.
One such method is laptops-as-a-service, now available in several business locations across the developed world. All you need to do is pick up a rental laptop from the destination airport. These laptops come with secure, cloud-based access to company information and applications, and once your work is done you drop the machine off at Departures. Sounds easy, no?
More than anything else, such technologies may even have a favourable effect on employees who do not like staying away from their families every time a business trip is assigned to them – a lot of us would actually prefer a 3 a.m. video conference instead of flying all the way from London to Beijing for one meeting!
HTC is undoubtedly already sweating over the fact that its contract for the production of the Google-owned Pixel smartphones is about to run out as soon as the second version of the Pixel series hits the market later this year.
HTC’s two-year, two-device contract for producing the Pixel phones was a huge feather in its cap, and the tremendously favourable reaction to Pixel #1 must have been hugely comforting for a brand that has been struggling to earn sales acclaim. Despite critics’ approval of the HTC 10, the Taiwanese manufacturer has seen a consistent slide in market demand for its devices.
Now, with the contract expiring, the 2018 Pixel job sheet is what HTC and every other Android-major is competing for.
The Chinese-language publication Commercial Times reported that LG is currently leading the race and is the most probable winner of the production contract for the device currently dubbed the ‘Pixel 3’. The order is estimated at 5 million units for 2018 alone.
LG and HTC have a long history of fighting tooth and nail for Google’s devices – LG was Google’s comrade through the Nexus years, while HTC too has an impressive record with Android, the Nexus project and, most significantly, the ultra-important Pixel lineup.
As of now, the Pixel 2 is expected to hit the shelves with features like a water-resistant body and possibly Quick Charge 4.0 (basically, you charge for a lot less time and enjoy a lot more battery).
The success of the Pixel can be gauged from the fact that Google is having a hard time meeting the market’s ever-rising demand – this after HTC had already shipped 2.1 million Pixel and Pixel XL units last September, the report suggests.
It will be fun to watch who finally gets the cake. Nonetheless, it will not be wrong to say that the 2018 Pixel has already started to garner attention even before its production begins – and that kind of enthusiasm is always a great morale boost in today’s whimsical and crowded marketplace.
App Intro: 100 MB - Tendulkar's Building His Own Social Network
We’ve seen him chasing down impossible targets, we’ve all shouted his name in stadia and at home, we’ve celebrated his life and suffered his retirement from cricket. Heartbreak knew a new name.
Well, time to rejoice. Sachin’s fans can now connect directly with Sachin Tendulkar via his mobile app named ‘100MB Cricket’.
Envisioned by Sachin himself, the app was conceptualised and developed by JetSynthesys, with the intent of connecting with people who cared for Sachin and wanted to know more about him, his views, and to interact with him.
When asked what pushed him in this direction, the cricket legend replied, ”Many of you and my friends have asked me what are you going to do in your second innings of life. The first innings went on for 24 years. There are a number of things which I am involved in, but something which I am looking forward to starting soon, in fact now, is the digital innings”.
Sachin had promised that the app “will show a different side” of him.
To leave people with no doubts about his sincerity in the matter, Sachin added a bonus – showcasing his singing skills!
Tendulkar surprised everyone when he shared the stage with Sonu Nigam at the launch of the app. They sang a duet called Cricketwalli Baat, which is also available within the app.
“I was fond of music, but never thought I will sing a song“, he said with a twinkle in his eye.
When questioned whether he would interact with the young dreamers he said, “I’m absolutely ready. I like spending time with youngsters and I will be interacting with young cricketers for sure“.
So, all of you out there whose heart still sings for Sachin, head on out – get the app (iOS and Android) and get talking with him!
Microsoft, iPhone, Galaxy S8 - All Under One Roof?
From automobiles to smartphones, Microsoft has its fingers in all sorts of pies.
And it’s been picking up some new tradecraft too. As it expanded its footprint with “Surface As A Service”, folks at Microsoft realised there was a play they could make in the retail sector – one that might benefit both Microsoft and some of the world’s largest brands, even if they be competitors.
Clearly, this is a Nadella-pivot – I guess he knows better than anyone else that the Windows Phone isn’t really going to take off (not to stratospheric levels, at least). So a couple of years ago, he’d (much like John Chen of BlackBerry) decided that if he couldn’t get the platform into people’s hands because of the floundering app store on the Windows 10 platform, Microsoft may as well peddle its (fairly excellent) apps on Android and iOS, and make a dent in those universes.
Keeping with the SEI (a.k.a. Surface As A Service) model, the technology giant has reached an agreement with Samsung to retail the Galaxy S8 and S8+ in Microsoft Stores. Not only that, these devices will carry some prestigious Microsoft apps out of the box, and will actually be branded as Microsoft Edition models of the Samsung duo.
With that win in its hip pocket, and playing a poker face, Microsoft announced a deal with yet another company. You might have heard of this company in some corners – it’s called Apple Inc.
Yup, you heard it right! Microsoft and the Cupertino-based monolith have struck a deal. While all the details of the deal aren’t yet public knowledge, the word on the street is that you’d now be able to get iPhones in the same Microsoft Stores as Samsung’s Galaxy S8 and S8+!
So, what does the deal imply? Obviously, greater profit for Microsoft. And the unique honour of having its apps bundled on iPhones.
It is an honour – the last third-party app that Apple bundled on iOS (other than Facebook and Twitter) was Shazam. So it’s no small feat for Microsoft to have eked out that privilege.
There’s something else that trade pundits are saying may be in the offing – a certain amount of Microsoft-oriented customisation. Per the pundits, if you want to swap Siri with Cortana, or replace iCloud with OneDrive, you might actually be able to do that on these special-edition phones. But I’m holding my verdict on that conjecture for the moment. I’m not sure why Apple would allow something as basic as the assistant to be replaced (especially since we’re also hearing of Siri playing a much larger role in the upcoming iOS 11).
On the swappability of iCloud – I can pretty much laugh that one out of the building. Apple would never let anyone tinker with something that’s become the critical backbone of the entire iOS ecosystem, and even more critical to the user experience, since it holds up central bridges like device backups, Continuity, Handoff, iMessage, Photos et al!
Back to the “why” of it. While some may think Microsoft’s biggest win here will be profit from retail sales, I believe it is actually the overnight windfall of millions of customers on the newest and most cherished flagships in the world that Microsoft is vying for. That, for me, is the brilliance of this pivot from Nadella.
A Microsoft representative stated, “Our deal with Apple helps us provide customers with easy access to our services even if they choose a different mobile platform. We respect everyone’s decision to use Android or iOS, and this is why we’re trying to help them make no compromise. Bringing Microsoft apps on as many devices as possible is a priority“.
Some folks claiming insider-information have stated that the iPhones too will be labeled as Microsoft Edition, and might even sport a Microsoft logo on the back of the iPhone.
Personally, I find that hard to believe. For a brand as proud of its logo, and as gritty about keeping its devices free of any other branding, I very seriously doubt that Apple would allow any form of additional graphics (dare I call it graffiti?) on its premium devices.
The Microsoft branded iPhones will hit the stores on April 1. We’ll know more then.
Telecom in India (and everywhere else in the world), is a marathon – actually a steeplechase. And like any long-range sport, the rules are simple – keep running, stay in the game, but conserve your resources with the end game in mind.
Only those who pace themselves and stay alive to competition can even hope to be anywhere near the finish line.
But newcomers to the sport often make the mistake of speeding up too early, the trophy looming large in their thoughts. Some even consider the more staid approach as a needlessly pessimistic gameplan.
Consequence? More often than not, they burn out just 10% into the race.
Is that happening in Indian telecom? Seems so.
From the start, Reliance Jio was saddled with three weights around its ankles – being the newest entrant in the telecom game, it needed to make a loud splash (to get people’s attention); it needed to poach customers away from their current providers, since there aren’t too many new customers to be had in the already-overcrowded Indian telecom market; and most importantly, it was saddled by its own seven-year-long run-up to launch. Behemoths have been built ground-up in less time – ask Google, or Facebook!
Anyway, in order to achieve the first two objectives, Reliance did what Reliance does best – appeal to Indian customers through their wallets. Freebies are the currency Reliance often relies on, but this time it seems to be backfiring.
When the network launched to the general public a little over six months ago, it completely rocked the telecom market in India. Offering all its wares for free, it created a shockwave in a market that is price-sensitive to a fault.
People switched to Reliance Jio in millions, making it the fastest growing network in the history of the world. Customers flocked to it, to experience the network (but between you and me, most were just happy to get gazillions of bytes of Mobile Data free). Customer acquisition numbers were bandied about, newsrooms aghast, and barrels of newsprint ink spilt toasting the new entrant. Last I heard, over 122 million people had signed up with Jio. That is a staggering number for a market already thriving with eight other operators.
At Chip-Monks, many a water-cooler moment was spent dissecting the reality people had forgotten – most customers had bought additional connections to get onto Jio – they hadn’t transferred out of their original provider’s network! That, by itself, foretold what would happen once the freebies were retracted. People would have to choose whom to spend their dime on.
And in the telecom world, that spend is decided by three things – network quality, customer service (actually, the lack of the need for it) and finally, cost.
The first factor itself became Jio’s stumbling block – as people experienced Jio, they realised they were sacrificing network quality for the pennies they saved, gaining Data quotas they couldn’t use because the service didn’t always work, and speeds on Jio’s 4G were lower than other providers’ 3G. So there wasn’t much in there for customers to appreciate.
Now, as freebies come to an end, so does the novelty.
The question thus became – what would Jio do, to hold onto these 122 million?
Answer: follow Amazon and create a “Prime” club. But instead of customer experience being the calling card for the club, Jio fell back on what it knows best – more freebies.
Launched five weeks back, on the 21st of February, the Jio Prime offer provides a year-long supply of Data and calls at extremely subsidised rates. Paying just INR 99 to enrol in Jio’s Prime program, plus only INR 303 (which is less than US$ 5!), gets the member 28 GB of 4G Data and free calls to anyone in India!
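To put those numbers in perspective, here is a quick back-of-the-envelope calculation (the INR-65-to-a-dollar exchange rate is our own assumption, roughly where the rupee stood in early 2017):

```python
enrolment_fee = 99    # INR, one-time Jio Prime enrolment
recharge_price = 303  # INR, per recharge
data_gb = 28          # GB of 4G data per recharge
inr_per_usd = 65      # assumed early-2017 exchange rate

price_usd = recharge_price / inr_per_usd  # ~4.66, i.e. under US$ 5
cost_per_gb = recharge_price / data_gb    # ~INR 10.82 per GB

print(f"Recharge in USD: ${price_usd:.2f}")
print(f"Effective cost:  INR {cost_per_gb:.2f} per GB")
```

At roughly eleven rupees per gigabyte, it is easy to see why the offer was pitched as a steal – which makes the lukewarm uptake all the more striking.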
Sounds like a sweet deal, right?!
Well, the market’s response does not seem to reflect that – of the current 122+ million Jio customers, only about 16 million have signed up for the Jio Prime offer.
That pale number is then the answer – not many people are interested in Jio when it’s not free.
Embarrassingly, the number is less than half of the target Jio had reportedly set for itself internally.
Just two days from now, Reliance Jio will switch over to being a paid service for the first time, and this lukewarm-at-best response must have Ambani worried – especially after pouring an additional US$ 4.4 billion into Jio just two months ago.
Has Jio tired itself out already? Did its strenuous 400-metre dash cost it a place in the marathon? We’ll only figure that out as time passes, and Jio’s mettle is further tested.
In the meantime, we see this as an opportunity for all the other operators in the market to gain back some of the space and the customer-sovereignty they may have lost during the run. That, however, is also not going to be easy.
The telecom industry in India is struggling, to say the least. The reasons are many, and it’s not pleasant for anyone. There is debt looming over every bigwig in the industry, and Jio’s brazen entry only made it worse – operators that were already struggling to make ends meet were forced to drain their resources further, to stay alive in the cut-throat competition.
To combat this annoying new neighbour on the block, who was enticing the world with ‘free service’-coated candy, competitive offers were put out by every operator – Airtel, Vodafone, Idea and even BSNL. While we don’t have numbers yet, it’s clear the customer benefited from the turf war. And that’s always a good thing.
Customers saw a hitherto unseen benevolent and generous side of their telecom operator, and speaking from personal experience – it was a good feeling, to be wanted, and to be valued.
Going back to my earlier statement, Indian telecom customers value network quality and customer service more than cost – and each Telco needs to focus on these two critical elements if they are to remain in the marathon. Jio’s mad-dash and how it caused the somewhat-complacent existing operators to up their game, should remain in their minds for some time to come.
Back to Jio – not having achieved the numbers it had set for itself, the question arises: will Jio extend the deadline to sign up for Prime? And if it does, will the others extend their competitive offers?
Time will tell, but I am forced to recall my closing words in my earlier article about Jio and its approach –
“All said and done, Jio has set many ripples in the water, however since it has to wade the same waters itself, one would think it’ll learn to swim instead of splashing around“.
The unresolvable tussle between Android and iOS has always been a source of great contention amongst tech geeks. Oftentimes, the conversation turns to roughhousing, thanks to the irreconcilable nature of the topic.
I like to call it the “What came first; an Egg or a Chicken” question of the Mobile Phone Universe.
There’s never been an articulate outcome adjudging one of the two operating systems as the better one. Eventually, the only way the dispute gets shelved without fists being thrown is to agree that while Android runs on the majority of smartphones, iOS is what defines smartphones [that too causes another long, vehement round of words, but the volume at which interjections are thrown reduces, somewhat].
That said, most sane folks can’t disagree that Apple is the Utopian version of class, luxury, style and comfort, and practically any gadget manufactured by Apple is the epitome of perfection.
People are also in general agreement that Apple’s products are prettier, largely because of the immeasurable amount of time the team of Designers, Developers and Engineers at Apple spend on inane questions like “How many holes should the speaker grille have this time?“.
The point is, Apple’s attention to detail and its ability to focus on form and function is nigh-on unparalleled.
That said, in the last three years, Apple hasn’t made any significant visual or design change to their mobile phones, and it’s getting boring. Sales figures may belie this claim, however we at Chip-Monks talk to enough people to have a fairly good idea of customer sentiment.
Last year, Apple released only two phones – the iPhone 7 and the iPhone 7 Plus. Unfortunately, they both garnered more snickers than positive attention, because of the removal of the 3.5 mm headphone jack – which resulted in yet another after-market product that needed to be bought: the newly invented Apple AirPods (Apple’s wireless earphones).
Apple has undoubtedly become boring – even to hard-nosed supporters. iPads are forgiven to a certain extent because they aren’t Apple’s prime product anymore (and because everyone who needs an iPad has almost always procured one already, and doesn’t really need to change it often). But iPhones, MacBooks (Air and Pro, both) and iMacs are all as Mr. Jobs left them. And that product fatigue is visible, without any doubt, to anyone whatsoever.
So, Apple needs to do a 2007 again – reinvent smartphones, including the iPhone.
The world (at least the part of it that doesn’t hate Apple for passionate reasons, including pedantic and existential ones) is still faithful to Apple’s abilities. Most are hoping that Apple picks up its magic wand this year, instead of a carbon paper – and does something that justifies its exalted, almost revered status as a manufacturer.
One of the changes that a lot of people seem to want (although not knowing why) – is for Apple to move to OLED screens. It bears repetition – they don’t know why they want OLED, but they just do (perhaps because it’ll get their Android buddies off their backs about the last-resort “the iPhone still uses LCD!” insult).
Till a year or two ago, Apple naysayers used to rub it in Apple-friendly folks’ faces that “Android allows so much customisation”, but that jibe has been losing its credibility, as iOS 10 allows almost all that Android does, without the vulnerabilities associated with such “freedom”.
Coming back to the topic at hand…
With the iPhone’s 10th anniversary around the corner, many soothsayers believe the company is going to bring in a significant (hopefully) change to the iPhone’s design. The brand is said to be “all set” to embrace OLED technology and incorporate it in its prospective model – the “iPhone 8” (or whatever it will eventually be called).
Reports and gossip surrounding the device also suggest that the iPhone 8 will feature a nearly-edgeless display with incredibly narrow bezels, with specialized sensors on the sides.
To date, Apple has relied on Gorilla Glass for all their phones, with no fancy footwork on the display (save for the pressure-driven 3D Touch).
While Apple already uses curved glass for its watches and current iPhones (which kind of defeats any “edge”-related advantages Samsung may claim), it’s undeniable that, scientifically, OLED is the more advanced choice of hardware.
The rumour mill says the so-called iPhone 8’s front is going to be an all-glass OLED panel that wraps around the edges. Should OLED actually make it to the iPhone, it would be a major change in the elemental design and would hopefully help Apple eradicate the last major source of mirth in an extremely competitive mobile phone market, where even one aberration stands out like the proverbial nail begging to be hammered away at.
Some consider OLED screens to be manifold better than their LCD competitors (at Chip-Monks though, we’re not so sure… not because we don’t understand the differences, but because all iPhone screens so far have been excellent display units, despite being LCD panels).
That said, an incredibly thin and stylish OLED screen would provide insanely vivid colours, brighter and cleaner contrasts, and may end up being more power-efficient than the LCD screens used so far.
However, everything comes at a price. The use of new, advanced OLED screens is not going to serve Apple too well – primarily for two reasons.
The first is the cost of an OLED display, which tends to be much higher than that of an LCD screen. While Apple may spring for it and absorb it in their own costing, any damage to your iPhone 8’s screen would result in you shelling out significantly more money to fix it.
The second is the insane volume of OLED screens that needs to be manufactured to meet the demand for iPhones.
Apple, being one of the largest manufacturers and sellers of smartphones, has to maintain its reach, and for that it requires ridiculous volumes of OLED displays. With Samsung being the only viable supplier available for 2017 (at least), that puts all of Apple’s OLED eggs in one basket – which may not be a wise choice (especially given that all of Apple’s last 4-5 product launches have faced more-demand-than-supply issues).
We wrote about both these conundrums in two articles: the first when we covered conjecture around Samsung being Apple’s vendor of necessity for OLED screens and the other that covered Apple’s next roadblock of insufficient supply.
Apple’s wave of “upgrading the display” in a way proves futile, because it’s conceivable that all of its competitors are going to provide the same display quality. Google’s Pixel 2 and the Galaxy S8, both to be released this year, may sport the same OLED display. In fact, the Galaxy S8 might even launch months before the iPhone 8.
So, it looks like Apple is in a pickle after all.
That being said, the rumours around the iPhone 8 only strengthen our belief that Apple is going to have to do a lot this year, just to get out of the rut it’s fallen into over the last three years.
Amazon’s Automated Retail Store Hits A Ceiling - 20 People, Max!
Amazon, the American retail giant that has lately been scampering ahead with technology R&D of many shapes, was all set to open a cashier-free convenience store in the U.S. But it’s run into a bit of a glitch.
Amazon has, for a while now, been trying to build stores that would let customers simply walk in, pick up items, and then walk out. The customer would be automatically billed to her Amazon account, without any need to wait in line, get things billed or swipe her card, in the regular tedious way of shopping.
A store was supposed to open early this year, but, due to some technology related quirks, they’ve had to put the launch on hold for a bit.
It turns out the store – dubbed Amazon Go – currently only functions properly with fewer than 20 shoppers inside it at any given time. Put any more people inside, and the shopper-tracking technology breaks down, perhaps because the system finds it too difficult to follow that many people concurrently.
In addition to dealing with the number of customers flowing into the store, another problem this store-of-the-future seems to be having is that of tracking items that have been moved from their proper place on the shelves.
So if you pick up that soap bar, put it in your cart, and later change your mind and put it back on another shelf in another aisle, the item gets lost and the system is no longer able to track it.
These quirks – natural as hiccups may be in new technology (after all, no one expects these things to work smoothly from day one) – come as a setback to Amazon at this late juncture in its plan. Amazon had been intending to open the store by the end of March this year. But from the looks of it, Amazon is back at the drawing board on the technology front, trying to iron out the kinks. And that may take a while.
Having established itself as the leader of the online retail world, Amazon has been keen to expand into retail stores, for which this technology is critical to setting itself apart.
Clearly, Amazon wants the stores to be attractive for reasons other than just the brand’s name or the prices on offer.
Amazon’s expansion drive came to the fore when it opened its brick-and-mortar bookstores – five of which are functional right now, with five more to follow soon. The plan to open a furniture store that would use augmented reality to help shoppers visualise products is another element of its ambitious expansion plans.
This next-generation of technology-enabled retail stores would have really served to distinguish the company’s retail strategy.
That said, we at Chip-Monks have always held Amazon in the highest esteem for their tenacity and their amazing ability to deliver. Believing in Amazon’s potential, we would be willing to take bets on the store being up and running to the public, before the end of this year, perhaps in just a few months.
These guys have always been the kind who, when they see a problem, usually buck up and get it sorted out.
Amazon, Go, create the store of the future! Come on, we have our fingers crossed!
Uber's Autonomous Car Involved In A Crash - Here's Why It's Still On The Road
While Uber is really out in the ring to get the self-driving cars going, they’ve been on a roller coaster the last few weeks.
In the midst of organisational chaos (we’ll cover that in a minute), one of the company’s self-driving cars was involved in a crash, and testing was suspended post the “crash” – fortunately, to be resumed just a few days later.
The news of the incident did raise a few questions – was this just a setback from a crash, or is something bigger going on? Where is the program really headed?
The objective of this article is to help answer those questions for you.
First up: What Happened?
Well, the chips started falling over the weekend, when one of Uber’s test cars was involved in a car crash in Tempe, Arizona.
Uber’s Volvo XC90 rolled over onto its side when the driver of another car failed to yield. The Uber car, at the time of the crash, was in autonomous mode even though it had a driver and an engineer seated in it, as per the standard requirements of testing self-driving cars.
Photos and videos posted on Twitter showed a Volvo SUV flipped on its side after an apparent collision involving two other, slightly damaged cars.
There were no serious injuries as a result of the crash.
The Tempe Police Department and Uber investigated and subsequently clarified that Uber’s self-driving car was not the causal factor in the crash.
In the three days that the program was suspended, Uber itself reportedly wrapped up a quick investigation and cleared its autonomous car for further testing.
Testing has now been resumed in all three cities where Uber operates its self-driving pilot program – Tempe, San Francisco and Pittsburgh.
Is there really cause for alarm in having self-driving cars tested on public roads?
Uber’s test program runs in two other cities in the U.S. as well, and there have been no reported collisions or accidents involving Uber’s autonomous vehicles there, despite months of testing.
And it is quite logical to expect that programs like these will have a few incidents; no one can expect computerised machines and algorithms to be 100% reliable already. Really.
These programs are still in developmental stages and the algorithms and controls take time to perfect. They are unbelievably complex and need to be refined with real-world inputs and situations – which is precisely why they are called “test” programs!
As we’d asserted earlier when a car from Tesla was involved in a crash – the testing of autonomous vehicles is important and necessary. Experts agree too.
“Driverless cars keep getting better the more they drive, whereas humans have a roughly constant safety record over the years“, said Hod Lipson, a roboticist and professor of Mechanical Engineering at Columbia University. He even cited an important statistic – an estimated 23,000 traffic fatalities happen per week, globally.
“The idea that somehow a human driver makes the drive more secure is false comfort, and potentially dangerously misleading“, he added.
So, like we have said before, if programs like these make cars even one percent safer, that is still over two hundred lives saved every week, and more than ten thousand a year. And that is worth a lot more than whatever jab anyone can take at programs of this kind.
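Taking Lipson’s figure of roughly 23,000 traffic fatalities a week at face value, the arithmetic behind that claim is a one-liner (the improvement percentages below are ours, purely illustrative):

```python
weekly_fatalities = 23_000  # Lipson's global estimate, per week

for improvement in (0.01, 0.05, 0.10):  # 1%, 5%, 10% safer driving
    saved_week = weekly_fatalities * improvement
    saved_year = saved_week * 52
    print(f"{improvement:.0%} safer -> "
          f"{saved_week:,.0f} lives/week, {saved_year:,.0f} lives/year")
# 1% safer -> 230 lives/week, 11,960 lives/year
```

Even the most conservative line of that table is a number no jab at these test programs can argue away.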
What adds to this, for Uber at least, is that the incident comes at a tough time for the new-age icon. Over the last few months, the company has been dealing with a crisis of sorts involving its workplace culture and business practices.
First, there was the discovery of Greyball, a tool the company reportedly used to skirt authorities cracking down on Uber drivers worldwide.
Then the company was hit by allegation after allegation of gender-based discrimination in the workplace.
Then Travis Kalanick, the brain behind the company, was forced to apologize for his aggressive behaviour after two videos of him in a verbal altercation with Uber drivers surfaced.
In this case, even though Uber is technically not at fault for the accident, and no one expects programs like this to run without a few bad peas, it might still be problematic for the company.
The San Francisco-based company has in the past, gone head to head with regulators and critics, as it tried to get cities to agree to the testing of autonomous vehicles within their bounds. Google, General Motors, and Ford, on the other hand, are doing all their testing in California, where they are registered to do so.
So, going out on a limb to get people to agree to this kind of testing outside of controlled spaces, only to have a crash become the centre of a negative spotlight, can quite expectedly prove problematic for the company. The only way to put the incident behind them is if this proves to be the only bad pea, and nothing else of the kind happens again anytime soon.
A lot of us (in the Tech world) want Uber (and Tesla) to succeed, and the world needs to understand the importance of what they’re attempting and how complex the entire bag of tricks really is. So we all need to be patient, and tolerant.
Meanwhile, Uber, stay strong. You’re going to pull it off, aren’t you?!
With more and more worry surrounding the hackers’ claims that they might have direct access to 600+ million iCloud accounts and Apple IDs (more news on that here), we need you to do some really simple things to protect your data and save yourself lots of heartache later on.
We at Chip-Monks still believe in Apple and its ability to protect us; even so, it is a very good idea to do all of the below, and to repeat it periodically.
There are a few basic things you need to do right away – chief among them, changing your Apple ID password to one you don’t use anywhere else, enabling two-step verification on your account, and backing up your data locally.
Having done all of the above, we believe your account should be quite secure, and you shouldn’t have much to worry about.
We would also like to reiterate that the evidence that the hackers actually have access to that many account details is pretty thin. Also, it is doubtful that the breach is on the part of Apple.
The likeliest explanation for the hackers having any information at all is a third-party leak – and the fact that we use the same passwords everywhere is the real vulnerability they may be intending to exploit.
As more on this unfolds, treat it seriously as a warning, and take the minuscule effort needed to make your accounts more secure – rather than as cause to panic. If you are panicking though, you can check here on how to survive an Apple iCloud wipeout.
We will keep you posted on more.
Here's What's Behind the Airport Ban On Tablets And Laptops
A few weeks ago, air travel authorities in the U.S. and the U.K. banned travellers from certain countries from carrying larger gadgets like tablets and laptops into flight cabins. All one could carry onboard now was a smartphone. And that had people seeing red!
The restrictions were placed on flights originating from Middle Eastern countries and some countries in northern Africa. At the time, the authorities gave only vague reasons, citing “security threats”.
But, as it turns out, they were actually on to something.
When the U.S. implemented the ban, the majority opinion was that it was basically another of the Trump administration’s authoritarian mandates. And people did not take too kindly to the travel ban. However, the U.K. was quick to follow suit, and when new reports surfaced, the background to the ban started coming to the fore.
Stories have recently been uncovered of a plan to tuck explosives inside a fake iPad. It is not clear whether this supposed tablet bomb is even real. But it now appears that this is not a case of the authorities being overly cautious in the throes of paranoia.
It’s a small step to take, in the face of what does have the potential to be disastrous.
Here’s why we believe that the authorities are not being “overly-paranoid”. There is a precedent for such attacks.
Back in 2016, a terrorist blew a hole in a Somali airliner with a “laptop-like” device. Thereafter, the concern that terrorists would try to target flights with explosives started mounting again, even though for the last couple of years attention had been concentrated on other forms of terrorism.
The authorities expect the terrorists to get more and more creative, and electronics are honestly, quite a viable conduit.
An iPad bomb or a laptop bomb is not easy to detect, and such devices are always carried in the plane’s cabin. An explosive in the cabin can be angled towards a window or a door, with a far more devastating effect than an explosion in the hold area would have. It could easily put hundreds of human lives in danger and, just as importantly, hand an ace of spades to the terrorist to coerce the pilots and staff into bending to his malicious intent.
It also goes without saying that there is a lot of information that we (civilians) do not know yet – about any other imminent threats. The move to not let devices be carried into the cabin is thus a good one, even though you might hate it at the moment.
Other Western, and western-oriented, countries are still mulling their approach to this threat. While they have not yet placed such a ban, discussions are apparently on at higher levels of the respective governments.
France is reportedly considering a ban but has not yet made a decision.
A spokesperson for the Dutch government said: “We are constantly monitoring the situation. At the moment we don’t see reasons to introduce similar measures”.
Belgium said it would not introduce a ban without a decision from the European Aviation Safety Agency, the EU body that develops pan-European safety rules.
Australia said it would monitor the new arrangement but was not at present planning to follow suit.
I don’t know if this is purely a security risk, or if other political, or politically-oriented, factors were involved in the two countries imposing the ban. But if the threat is credible, we must look at the move in a circumspect manner, and be supportive of the restrictions.
And, you might just sleep better on flights – for more than one reason!
In light of the recent threats that some hackers have been making to Apple (more about which you can read here), we believe it is a good time to have this particular conversation.
Even though we stand by the popular belief that the threat being made by the hackers, of wiping out millions of iCloud accounts, is bogus, at best, we are going to spend a little time being the Devil’s Advocate, and helping you ensure that you’re safe.
Let me say that we at Chip-Monks, have immense faith in Apple taking utmost care of our data, our devices and the security of everything we’ve been trusting them with. I’m sure Apple has checks and redundancies already in place.
But, we’re only trying to prepare you for what is the worst that could go wrong, if you’re one of the people who likes to be cautious and self-dependent.
That statement notwithstanding and irrespective of whether you’re backed up to the iCloud or not, I super-duper-highly recommend you drop all that you’re doing and manually back up your iPhone, iPad and Mac (if you’re on one), to a local storage device (like an external hard disk or your PC’s hard disk). Immediately!
Let us begin by making things easier, and dividing an average person’s data on her phone/device into two categories – one that is primarily on the device’s local storage (let’s call that On-Device for this article), and one that is primarily on the cloud that your device is connected to (let’s call that Uploaded for this article).
And yes, we are going to go by the assumption that you, like most people today have already enabled cloud connection on your device(s).
If you haven’t, then we strongly recommend that you do!
So, of the two major kinds of storage, the On-Device Data is the kind that resides primarily on your device’s physical storage. This mainly includes things like contacts, calendar, photos, and any other kinds of notes etc. you have been making.
The second major kind of storage is the one that primarily lives on the cloud. Even when Uploaded Data exists on your device’s local storage, chances are you’ve actually downloaded it from the cloud for offline access. This majorly includes the information on your cloud-linked mail account, your music, apps and videos, and such.
So, we are going to start with your to-do list to secure yourself.
Let’s begin with the scenario where you are fully within the iOS & OS X ecosystem – meaning your computer, your phone and your tablet (if any) are all Apple products.
The first thing you should do is backup all your data, to your computer. Manually.
You can back up your calendar, your contacts, your photographs, etc., simply by transferring them to your computer. Now, this sounds easy, but it gets tricky as you move on to your second kind of data, your Music, Apps, and Videos.
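If you’re comfortable with a little scripting, mirroring an already-exported photos folder to a second location is one quick way to keep a local copy. Here’s a minimal sketch in Python – the folder paths are just examples (adjust them to wherever your photos actually live after you’ve imported them from the device):

```python
import shutil
from pathlib import Path

# Example source: a folder you've already imported photos into.
SRC = Path.home() / "Pictures" / "iPhone-Import"
# Example destination: a backup folder (could be on an external drive).
DEST = Path.home() / "Backups" / "iPhone-Photos"

SRC.mkdir(parents=True, exist_ok=True)   # ensure folders exist for the demo
DEST.mkdir(parents=True, exist_ok=True)

# Copy everything, preserving timestamps; skip files already backed up.
copied = 0
for src_file in SRC.rglob("*"):
    if src_file.is_file():
        target = DEST / src_file.relative_to(SRC)
        target.parent.mkdir(parents=True, exist_ok=True)
        if not target.exists():
            shutil.copy2(src_file, target)  # copy2 keeps metadata
            copied += 1

print(f"Backed up {copied} new file(s) to {DEST}")
```

This only covers files you can see in a folder (photos, exported contacts, etc.) – it won’t capture app data, which is why a full iTunes/Finder backup is still the main recommendation.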
What we have so far been used to is that if something goes wrong with our device, we can easily retrieve the information off of iCloud, which is basically the backbone of the structure. However, what this recent threat teaches us is that the backbone is not infallible – it too can be threatened, so we must not take it for granted.
Thus, if the iCloud account is wiped out, what you lose is all your data on the cloud in the secondary category – your Uploaded Data.
You can currently download the Music part of your Uploaded Data and back it up to a computer, or even to the local storage of your phone itself (if you have the space on the phone/tablet). But that’s about all you can shore up/secure locally.
Apps, movies you’ve bought, and even your financial transactions with Apple could be irretrievably lost if your iCloud ID were to be wiped out erroneously or maliciously.
The most you can do for your Apps is take screenshots of your apps and related purchases (preferably from your Desktop-based iTunes software) and hope that the Doom’s Day prediction does not knock on your door!
Now let’s talk about what to do if you’re using a non-Apple computer.
Well, you mustn’t worry. Even though the systems are disparate – basically meaning that you have an iPhone for a phone and a Windows computer – you can still back up the On-Device Data to your computer. Of the Uploaded Data, you can at best download the music you’ve bought and save it on your device’s local storage, or your computer’s local storage, via iTunes on the Windows PC. But that’s pretty much all you can do if the iCloud backbone is threatened.
There is another storage hub you can look to – which is neither local, nor iCloud related, and that is the giant Google.
You can push all of the On-Device Data as well as the Music you’d have downloaded from iCloud on to Google’s storage options instead. Link up your calendar, sync the iPhone’s contacts to your Google mail account, store the photos on the Google Drive or Google Photos, link up your notes to your Gmail account, etc.
But none of those options really cover any of the data in the Uploaded category.
Now that the major questions seem a little settled, let us discuss a few nitpicks that are peculiar to Apple’s devices and their storage features. A lot of these are important, as you may not have realised a few of these things to-date. Take a break now if you need to; else ensure you’re awake as you proceed. 1-2-3, pinch yourself. Awake? Good!
One of the features on the iPhone (and iPad) is that of ‘Optimise Storage‘, which was born of a good idea, but in the case of the very iCloud backbone being threatened, it can turn disastrous.
It basically means that in an attempt to save space on your device’s local storage, photos on the device are automatically deleted once they are successfully backed up to iCloud. What you’ll then see on your iPhone/iPad is actually only a thumbnail of the picture. It’s only when you tap on the picture that you’ll notice a delay of a few seconds before it completely loads. This is because the device doesn’t have the picture anymore; it accesses your iCloud account and retrieves the original, full-resolution picture in real-time.
In the current predicament, if you lose the iCloud, then everything that has been backed up, and been auto-deleted from your device is lost as well.
If you’re preparing to survive Doom’s Day, you must then disable ‘iCloud Library’ on your iOS device, which will then prompt you to download all your iCloud photos onto your device. Say yes to the prompt on your iPhone. Let all the photos download (remember to only do this when you’re on Wi-Fi) before you go on to back up your iPhone to your computer.
You can always toggle ‘iCloud Library’ back on after the backup completes successfully.
Another thing peculiar to Apple devices is the auto-backup to iCloud feature. The last time your device was actually backed up to iCloud may actually be a very, very long time ago.
Most people set their devices to back up only when they are on Wi-Fi, charging, and left idle. So we should be safe in saying that most people’s devices back up at night, as they sleep. The backup should occur automatically every night. But backups on Apple devices are usually heavy – sure, they’re incremental, but new data builds up on the device every single day. So chances are that when you wake up the next morning and leave home, the backup is still incomplete. Away from Wi-Fi, it gets interrupted, and thus stalled. The (inconclusive) cycle repeats the next night. Which means there are times your data is not backed up for days, or weeks, without you even realising it!
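To see why backups silently stall, the gating behaviour described above can be modelled as a tiny check. This is a simplified illustration of the conditions, not Apple’s actual logic:

```python
def should_start_icloud_backup(on_wifi: bool, charging: bool, idle: bool) -> bool:
    """Simplified model: auto-backup only starts when ALL conditions hold."""
    return on_wifi and charging and idle

# Overnight at home (Wi-Fi, plugged in, untouched): backup can start.
assert should_start_icloud_backup(on_wifi=True, charging=True, idle=True)

# Leave home mid-backup and Wi-Fi drops: the cycle stalls until tomorrow night.
assert not should_start_icloud_backup(on_wifi=False, charging=True, idle=True)
```

The point is that any one failed condition resets the whole cycle – which is exactly how days can pass without a completed backup.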
Would you believe that I, of Chip-Monks lineage, just realised my iPhone 6s Plus was last backed up to iCloud one and a half months ago?!
So you should regularly go check for when your device actually backed up the last time, and not just assume that the backup settings you enabled are working just fine!
Now that we have told you most of what could go wrong, we admit that we have been participants in the Doom’s Day prediction.
We have immense faith in Apple and are fairly confident in Apple’s repeated assertions that no hack of the kind has happened on their servers, and the fact that they’re “actively monitoring to prevent unauthorized access to user accounts”.
And we more or less agree that a hack of this kind is a preposterous idea. Yet we’d urge you to be safe rather than sorry, and go back up your device.
We also urge that you enable two-step verification on all your accounts and that you do not use the same passwords for all sites. Write to us at firstname.lastname@example.org if you need help or advice on securing yourself.
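Since password reuse is the core vulnerability here, one easy fix is to generate a random, unique password for each site. A minimal sketch using Python’s standard secrets module – the site names below are just examples:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits and a few symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per account -- never reuse the same one across sites.
for site in ("icloud.com", "gmail.com", "mybank.example"):
    print(f"{site}: {generate_password()}")
```

A password manager does the same job with less friction, but even this little routine beats reusing one password everywhere.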
Meanwhile, we’ll stay tuned in for more status updates as the authorities and Apple release them.
Demonetization taught India how to live with digital payments. And while Paytm made a strong case for onboard (digital) wallets, it was still a bit of a convoluted process to first upload money to its Wallet and only then be able to spend it.
Uber and others did allow the addition of credit cards to their apps, but even those required an OTP or password to be entered, to be able to spend money.
All of that said and done, India tasted digital payments and wanted more. Life had suddenly become easier – the need to visit an ATM, or a bank, or ask Dad for a loan of currency notes, was gone. Every merchant large and small was suddenly amenable to digital payments.
Plus, with Apple Pay and Samsung Pay already thriving in international markets, it was just a matter of time before they arrived here.
Samsung today launched its payments tool, Samsung Pay, in India, beating Apple in the race to reach Indian consumers in this new, burgeoning space. And it’s exciting, because it simplifies life, and because it involves a bit of magic (you’ll see)!
What does it do?
Samsung Pay is a new digital payment service that absorbs all your credit cards, debit cards, and electronic wallets into one umbrella, which you can then use via your Samsung smartphone or smartwatch.
In simple terms, it replaces your plastic cards for transactions that you’d have made through swipe machines. It does not work for Online payments (i.e. websites or apps) just yet – though that is conceivably only a matter of time.
So, the obvious question is, how does it work?
Well, for starters, if your phone is one of the devices listed below, you will have to first install a service update which should be available over the air. Just head to the Settings section of your Android device and check for updates.
Here are the devices that are currently able to work with Samsung Pay in India:
Once your device is updated, you’ll be able to connect your payment method to the Samsung Pay application. This can be a card (credit and debit cards) or an electronic wallet (like Paytm) which will be saved to the device post verification.
No, really, how does it work? Won’t all merchants need new machines – which means that it’ll take 15 years for Samsung Pay to become usable?
A lot of things kill the acceptance of new services – complexity (during set up or usage), the need for new hardware (at the merchant or user level) and limited acceptability (remember how many merchants gripe when you want to use your Amex?).
In fact, the reason Paytm succeeded was exactly because it skated around all of the hindrances – it was ubiquitous, tremendously easy to use, and most importantly, because everyone was happy accepting payments through it.
There’s some magic in Samsung Pay!
Samsung has been truly brilliant with their approach. Knowing fully well that India (in fact almost all countries in the world) would take many years to change the current credit card machines to become NFC-capable, Samsung created and patented a technology that actually enables the Samsung device (smartphone and smartwatch) to mimic a magnetic card (like your credit or debit card).
Called MST (for Magnetic Secure Transmission) this patented technology replicates a card swipe by wirelessly transmitting magnetic waves from the supported Samsung device to a regular card reader. So, MST turns virtually every card swipe machine in the world into a contactless payment receiver, without needing any additional hardware or software upgrades!
Not only does Samsung Pay work with MST, it also uses the more advanced NFC protocol (when the device is placed near an NFC reader). Unlike MST, NFC works via radio waves and requires a specialised “receiver” in the receiving machine.
Both kinds of transaction are secure, and neither needs any “physical” connection with the payment-receiving machine.
Samsung’s ingenuity in supporting both MST and NFC enables almost all merchants across the globe to accept Samsung Pay, making it one of the most widely accepted mobile payment services on the market.
So… you should be able to use your Samsung Pay-capable device anywhere you like in India (and 11 other countries) starting today (though some merchants may not be aware of it for a while). Expect stares, incredulous looks, double checks and many questions from bystanders too!
To use Samsung Pay, once the merchant has input the amount to be paid on his credit card machine or NFC terminal, all you have to do is swipe up from the bottom of the screen on your device, choose one of your saved payment instruments and then bring your device close to the payment machine. The phone should automatically connect to the merchant’s machine, and you should be able to see a prompt on your device, indicating the demanded amount. All you then have to do is enter your PIN as if you were swiping a card, and hit “Pay”.
The machine should start spewing out the paper receipt shortly (post approval from your card issuer). That’s it, you’re done!
Which card issuers honour Samsung Pay in India?
The service will be available for users of Visa, Mastercard, Amex and Rupay payment cards, for now.
As for banks, ICICI, HDFC, Standard Chartered, SBI and Axis Bank cards are already supported. As is Paytm!
We’re hearing that UPI (Unified Payments Interface) and Citibank cards will soon be supported too.
Thus, this should be quite a functional service in metropolitan areas in a country like India.
Why you should use it.
First, there’s no need to take out your card (and inadvertently leave it behind at the merchant’s location) or even show it to the waiter/cashier (since your card’s security number is visible at the back).
Second, you don’t have to carry your wallet everywhere.
Third, in addition to the ease and comfort, the service also offers promotions from banks on rewards points and offers from Paytm as well.
There don’t seem to be any additional charges that Samsung is levying for using the service.
The application also comes with built-in support, in case you are lost or need help using the service.
All this makes for quite a tempting package!
What’s in it for Samsung?
The launch of Samsung Pay at this time can be expected to give Samsung the first-mover advantage in the Indian market – the second-largest smartphone market in the world, and one where the South Korean megabrand has already been the leader for quite a few years.
The service was first launched in South Korea in 2015 and is currently available in 12 countries including the US, China, Spain and Australia.
Yet, (and I particularly love this part) it took the company about two years to bring the service to India, despite the leverage the Indian market holds for the company. This was perhaps because the Indian market is still pretty traditional in its actual workings, and so are the concerns of the possible Indian users.
“We focused mainly on the barriers which were holding back people from going digital. We picked up the key themes centric to the Indian consumers — technical issues, security concerns and the lack of acceptability presence, and then integrated mobile wallets, UPI (Unified Payments Interface) and debit cards to Samsung Pay. The idea was to make in India for Indian consumers,” said Asim Warsi, Senior Vice President (Mobile Business), Samsung India.
What is noteworthy is that Samsung has gone out of its way to bring this service to the Indian market. They have worked to include debit cards and electronic wallets within the Samsung Pay ecosystem – options that are not available internationally. Clearly, these have been integrated specifically with the Indian user and our market’s dynamics in mind.
Now that the Samsung Pay genie is out of the bottle, the next few months should tell us how the Indian market responds to Samsung’s hard work!
Go out tonight, give it a try, once you’ve set up your Samsung device for this new service! Me, I’m off, hunting for a store that’ll swap my Windows 10 phone, for that delectable Samsung Galaxy S7 edge! Or should I wait for Apple? Hmm…
WikiLeaks Reveals The CIA Hacked Into iPhones, Android Phones And Samsung TVs
Last week Wikileaks dropped a dossier of documents pertaining to some surveillance programs running within the CIA. The documents provided some shocking insights – the most shocking of them pertained to surveillance through Samsung Smart TVs. Other devices also mentioned were Apple’s iPhones and Google’s Android phones.
In the post-Snowden world, the revelation of such information is not the most unexpected thing to happen; yet this incident does raise a lot of concerns.
The documents, dating from 2013 all the way to 2016, describe the agency’s abilities to use software flaws to hack into and control devices like the iPhone, Android, and Samsung TVs, along with Skype, Wi-Fi networks, and antivirus programs.
The document dump also shows that the CIA possesses the ability to hack into devices and remotely activate cameras, microphones and even the GPS, to keep tabs on a person’s location and… their surroundings.
Per these documents, the technology that the CIA is said to possess allows them unprecedented access to the compromised devices, almost as if they had a clone of the device with them.
It gets worse.
This access even compromises private messaging conducted via apps like Signal, WhatsApp, Telegram, Weibo and Confide by hacking the smartphones underlying the apps, to collect messaging and audio data before encryption is applied.
It is, then, not any particular messaging service or application that these programs attack; instead, they attack the underlying operating system on which the phone runs.
“These are not hacks against those apps, but hacks against the underlying operating systems”, said security technologist Bruce Schneier.
The sentiment was echoed on Twitter by Edward Snowden, infamous for NSA leaks of a similar kind back in 2013. Those leaks, now known as the Snowden Leak, reported on mass surveillance programs run by the NSA.
While the information revealed in both the cases is alarming, it is important to note that these two leaks differ significantly. The primary distinction between the two is that Snowden’s leaks revealed mass surveillance techniques that could be used to keep tabs on anyone and everyone at the same time. On the other hand, the recent leak reveals the existence of tools for individual surveillance, that have to be applied to specific people.
One of the programs revealed is called Weeping Angel. This program in particular, has raised many questions and concerns, due to two reasons.
First, it came into existence as a result of a collaborative effort between United States’ CIA, and the United Kingdom’s intelligence service MI6.
Second, it revealed what had not yet been considered a verified, legitimate concern. The use of smartphones and laptops for surveillance has been suspected for years and proven many times over, but this leak revealed that programs exist that can leverage devices as innocuous as Smart TVs and use them for surveillance.
The hack employed by the CIA allows them to put a Smart TV into what they call a ‘fake off’ mode. This makes it appear as though the TV is off, while the microphone on the television can still be used to record ambient chatter and conversations happening around the TV even in this dormant state.
What is unsettling is that this is precisely what conspiracy theorists have been warning us about for years now – the idea of a Smart TV being turned into something one can listen through comes directly from George Orwell’s 1984. A lot of 1984’s readers’ skin crawled at the prospect, but what allowed them to subsequently sleep at night was the belief that this “power” would stay exactly there – in fiction.
There were some people, though, who held on to this conspiracy theory – but most of us never took those guys seriously.
But with this leak, one can no longer be sure how much of Orwell’s forecast was fictional latitude and how much was prophecy. Now, the conspiracy theorists’ words are searing through people’s minds, scaring them with the new reality.
Add to this, the fact that Samsung, in their Terms and Conditions, states: “Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition”.
Soon after this revelation was unearthed in Samsung’s policy, the company changed it, making a public statement that their Smart TVs do not record any conversation. But this obviously leads one to ask: what exactly are you up to, Samsung?
There’s somewhat of a saving grace that I should point out right now, to restore some calm in your mind – even though the possibility of a Smart TV being used for surveillance is now very real and very dangerous, this particular program has not yet matured into a remotely deployable tool.
The mole that feeds this program needs to be installed on the specific TV via a USB drive, and it can be disabled simply by unplugging the TV set. That makes it unsuitable for mass surveillance, which is the scenario that we have all been concerned about. For surveillance of particular people though, the “hackers” have hit quite a jackpot.
So, unless you suspect you’d be on the CIA’s list of people to monitor, you’re kind of safe, for the moment, at least.
The companies involved, when contacted, emphasized consumer security and privacy, but confirmed little else.
Apple said that it had already fixed a few of the issues mentioned in the documents via the latest OS updates, and Samsung and Microsoft, both said that they were looking into the reports.
There is as yet no evidence that these tools were actually used. What the documents assert is that the CIA has the technology to execute the kind of surveillance they detail.
Predictably, the Central Intelligence Agency has refused to confirm the authenticity of the documents.
A question however still persists: how dangerous are these existing vulnerabilities in our gadgets, and should agencies like the CIA be allowed to use them?
Privacy advocates and those concerned with security would certainly have a lot to say.
As Chip-Monks, I’d say just two things: don’t let others handle your devices (no matter how innocent the need), and to the degree possible, do not fall for “free” apps, especially from unknown/small-time developers. There are more stringent measures you can take, but that’s meat for another article.
This time last year, Facebook poached Regina Dugan, an Advanced Technology stalwart from Google. Eyebrows were raised at this left-field hire.
Then, news of Facebook putting together an entire team and setting up what was called Facebook Building 8 had everyone befuddled. Confusion about what such high-end experts from the hardware research and development sector were going to do at Facebook ran into several hundred barrels of print ink.
Shortly, things fell into place and it started becoming apparent that Facebook was going the Google and Microsoft way – using the gains from its primary business, digital media, to feed its hardware research and development enterprise, and to foster supporting platforms that may at some point become lynchpins of their own.
Building 8 was thus the site of what could certainly be expected to be noteworthy hardware advancements – a mecca of innovation and Facebook’s hotbed of hardware and next-gen ideas.
The initial questions, born of curiosity, were of course many – What exactly was Facebook going to make at Building 8? When would we see any actual results?
Well, the ear-to-the-ground pipeline now has some details for us.
From what it looks like, Building 8 is quite similar to Google’s Advanced Technology and Projects Group, or ATAP. It is also not quite different from Google X, the lab where Google’s self-driving cars were born.
Even though Building 8 is hardly a year old, it seems they might already be ready to show the world some teasers of what they have been up to so far. Word is that Building 8 is working on four advanced technology projects, each of which will play an important part at F8 – Facebook’s annual global developer conference coming up in April.
These projects reported span everything from cameras and augmented reality, to science fiction-like brain scanning technology.
Recent developments have suggested that one of these four projects involves cameras and augmented reality. Given that Facebook has been quite publicly and actively working on VR, this would not be a far-fetched move at all.
Another project is expected to revolve around drones – something that rival Snapchat was also noticed experimenting with not too long ago.
This supposition arises from Facebook’s hiring of Frank Dellaert, a robotics and computer vision expert who was the chief scientist at Skydio, a small startup working on a yet-unreleased drone that can autonomously track a person while navigating through physical space.
Another project might involve brain scanning technology, or so goes the word. The hiring of a former Johns Hopkins neuroscientist who helped develop a mind-controlled prosthetic arm suggests something of the kind being experimented with at Facebook.
One of their projects might have medical applications – or so suggests Facebook’s hiring of an interventional cardiologist from Stanford, with expertise in early-stage medical device development.
The word also is that Building 8 might be developing a fifth unspecified project, and they are currently looking for the right person to lead such a project.
Amongst other noteworthy people who have recently joined Building 8 are Skydio’s former head of hardware, Stephen McClure, and Alex Granieri, who previously worked on Aquila, Facebook’s high-altitude drone designed to beam internet connectivity to the developing world.
What we find really intriguing is that all the project leaders within Building 8 get to work like mini-CEOs: they are assigned a timeline and an idea to develop. Work apparently happens in a manner that these inventions/creations can either be shipped and sold as standalone products, or be spun out into a different part of Facebook.
Facebook’s interest outside of the digital media platform has been evident for a while now.
We are all by now familiar with the internet.org efforts that the Silicon Valley giant has been making, to take the internet far and wide. Their efforts in this respect are comparable to those of Google, with the Google Loon project. They have also been working with VR lately, having acquired Oculus.
So, it’s now easy to understand that Building 8 is more like an addition to already existing efforts on Facebook’s part to expand into a varied amalgam of tech-related innovations.
The move to hardware is of course a fairly risky one for Facebook to make – a company that otherwise reigns as an internet giant, with its close-to-2-billion user base, and numerous products. What it also needs to be careful of is that it is taking on deep-pocketed competitors like Apple, Google, and upstarts such as Snap, in a cut-throat business defined by thin profit margins and complex logistics.
It would be interesting to see how it goes for them, what Facebook brings to the table, and whether any of these skunkworks projects actually manage to make a mark in their respective arenas.
Germany Considers 50 Million Euro Fines For Social Media Companies That Fail To Remove Hate Speech
The German Justice Ministry has recently introduced a new draft law that seeks to impose fines of up to €50 million (USD 53.2 million) on social media companies that fail to swiftly remove hate speech and other illegal content from their platforms.
Under the proposed law, any obviously-illegal content would have to be deleted by the social media companies within 24 hours, and any material that is later determined to be illegal would have to be removed within seven days.
So, if this draft law is passed and Facebook or another web company does not swiftly remove online threats, hate speech, or slanderous fake news, a fine can be imposed by Germany’s authorities. The amount and frequency of the fine will perhaps be determined by the gravity of the incident, and the company’s reaction to the complaint.
In addition to the possible fine, the Ministry has also asked social media companies to designate at least one contact person to deal with such complaints. This person would also be personally liable for ensuring that the regulations laid down by the Ministry are met, and where companies fail to comply, that person could face a fine of up to €5 million!
While it is true that the law, if and when passed, would be quite a step forward towards controlling hate speech on social media platforms, what is also true is that the mere fact that such a law was even drafted indicates how grave a problem hate speech (and the like) has become for the real world. Such invective can’t be confined to social media platforms, or “things on the internet”, though.
What becomes noteworthy then, are the numbers – how much hate speech and objectionable content is actually reported, and how much of it is removed by the companies, in the first place?!
Well, for now, the numbers for Twitter do not look too good – since Twitter reportedly only removes 1% of the reported objectionable content.
Facebook, in contrast, reportedly removed 39% of the reported content, while Youtube reportedly removed 90% of the flagged content.
In matters of objectionable content, the speed of action also becomes important. If something objectionable is removed only after a lot of people have already seen it, there is little point in the “removal” of the item. What becomes noteworthy then, is that reportedly just 33% of the objectionable content that Facebook blocked or removed was actioned within 24 hours; none of the reported Twitter posts were removed within that time frame.
This new draft law, if passed, would then hold the companies far more accountable in this regard as well.
Germany’s stance on the regulation of hate speech and such has been quite strong for a while now. The country has increasingly pressured U.S. based tech companies to combat such material online much more aggressively.
In fact, it was back in 2015 that Germany influenced Facebook, Google, and Twitter to agree to reviewing and removing reported hate speech within 24 hours.
Of late, these platforms have been facing more and more flak, given the recent onset of the Fake News problem on the internet. Consequently, Facebook made Germany one of the first places to receive its Fake News filter, amid concerns that disinformation campaigns could influence the nation’s upcoming elections.
In a statement, a Facebook spokesperson said: “We have clear rules against hate speech and work hard to keep it off our platform. We are committed to working with the government and our partners to address this societal issue. By the end of the year, over 700 people will be working on content review for Facebook in Berlin. We will look into the legislative proposal by the Federal Ministry of Justice”.
What is also noteworthy at this point is that Germany poses quite a problem for the social media companies, most of which are American and are used to American standards of free speech.
Due to its Nazi past, Germany bans public Holocaust denial and any overt promotion of racism.
The situation in Germany has become even more complex now, with the influx brought about by the recent migration stemming from the Middle Eastern refugee crisis. The country’s politics has become rife with even more complications, and has sparked a backlash among some Germans, including a rise in online vitriol. Wanting to keep a cap on that is only natural at this point.
While the bill is not a law yet, there is quite a good chance that it will soon be granted that status. As we wait to hear more on this, we must also contemplate what other countries are doing in this respect, and ask: is Germany being a little too harsh and/or paranoid, or is it just being practical, because sometimes problems need hard stances before people actually decide to pick up the slack?!
You’d agree, audio/music from headsets, even the most expensive ones you’ve ever used, sounds flat. As flat as a piece of paper, when compared against what it sounds like in real life.
Let’s take an example: if you’re in an airplane, eyes closed and at peace, you’d hear the drone of the engine from one ear, the crackling of a wrapper from the other, the turn of a page, the step of someone in the aisle, and even the air coming from the overhead vent (despite its very subtle hiss). You may possibly even hear someone just smoothening his shirt over his belly as he leans back for a nap!
Both your ears would give you a very clear, discernible three-dimensional plane of sound that you (thanks to your brain’s sound-interpretation algorithm) would be able to clearly understand and make peace with, as you nod off to sleep.
But, if I were to place a voice recorder in your lap that recorded all this ambient noise as you slept – when you later played that recording back using your favoured earphones, you’d actually only hear a jumble of noise that, while discernible, may at best sound two-planar and flat – nowhere near as real-life as if you’d been awake to experience it first hand.
Here’s the kicker – it’s not the voice recorder’s, or your earphones’, or even your brain’s fault. It’s not the hardware so much as the technique by which audio is captured that determines whether you can fully enjoy audio in all its three-dimensional glory.
Despite all the development in materials and hardware, how binaural audio recording (i.e. audio that involves both your ears) can be recreated in realistic three-dimensional form has long been a dilemma for the industry.
Experiments and techniques abound, including the implementation of microphones embedded in some crazy fake ears. This has become a common way of recording binaural audio, but it’s not the only way, nor the best approach.
It’s akin to how most TVs today convert regular visuals into three-dimensional ones – artificially, digitally. And those are obviously not the same as movies recorded in 3D! So, there has to be a way to record audio in 3D too.
Well, a new Kickstarter product, OpenEars, from a company called Binauric, could make recording binaural audio easier than ever. OpenEars takes the novel approach of building microphones into in-ear headphones. And there’s another twist.
Many binaural microphones try to simulate the shape and density of the human head in order to reproduce the way sound actually reaches our ears. OpenEars sidesteps this by using your own head (we do mean physically, not metaphorically) and lets you simply place the microphones in the right spot.
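The 3D effect binaural recording captures comes largely from tiny timing and level differences between what your two ears hear. Here’s a minimal sketch of the timing half of that idea (our illustration, not Binauric’s code) – a mono sound is “placed” to one side by delaying the far ear’s copy:

```python
# Hypothetical sketch: simulate the interaural time difference (ITD)
# that makes binaural audio sound three-dimensional.
import math

SAMPLE_RATE = 44100  # samples per second

def binaural_pan(mono, delay_samples):
    """Place a mono signal 'to the left' by delaying the right ear's copy."""
    left = mono + [0.0] * delay_samples        # pad end so lengths match
    right = [0.0] * delay_samples + mono       # right ear hears it later
    return list(zip(left, right))              # one (left, right) pair per sample

# A 10 ms burst of a 440 Hz tone, reaching the right ear ~0.6 ms (26 samples) late
tone = [math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)
        for n in range(int(0.01 * SAMPLE_RATE))]
stereo = binaural_pan(tone, delay_samples=26)
```

Your brain decodes exactly this kind of offset into direction, which is why a recording made from microphones sitting in your actual ears sounds so much more “real” than one from a recorder in your lap.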
So, if you like recording videos using your smartphone, this product could well be for you, as it’ll allow you (and your friends) to enjoy real-life video and audio recorded on a whim!
How does OpenEars enable that? Well, the Bluetooth headphones have microphones built in, and a mode called HearThrough allows mixing in live sound from the environment along with the music you’re listening to, if you want. This makes it safer to ride a bike or perform any activity while enjoying audio through your earphones – you stay fully aware of your surrounding environment, thereby mitigating any untoward surprises.
To me, this product feels inevitable.
Binaural mics that go into your ears have existed for a while and range from USD 60–500; however, they can’t be used with most (maybe even all) smartphones, as the average microphone jack supports a mono signal, while stereo is a prerequisite for binaural recording.
I own a couple of pairs of these, but I never carry them around because that would mean also carrying around something to record with, like a bulky Zoom H4n. Not so with OpenEars.
And you have the advantage of them being headphones also, so when you want to record something, they’re already in your ears. For a suggested retail price of around USD 225, this is just a little bit more than you would pay for a nice pair of in-ear binaural mics.
Today, binaural audio is mostly used in music, sound design, and niche YouTube communities. Making it easier to record 3D sound directly to your phone could open up the idea to a more mainstream audience. Imagine if every Snapchat you received was recorded in binaural! The immersive quality of 3D audio would literally add another dimension to video on social networks.
Just wait, the binaural wave is coming.
This isn’t Binauric’s first foray into speaker-mic hybrids. Its first product was a Bluetooth speaker and binaural microphone called Boom Boom. Although I haven’t tried OpenEars yet, I have friends who have been playing with Boom Boom and will vouch for both its sound quality and design.
Binauric says OpenEars will be compatible with GoPro cameras, potentially adding an aural dimension to POV extreme sport videos.
Binauric has even created special mics called OpenMics, which can be mounted on a helmet.
Binauric planned to ship to the first 500 backers by November with mass production scheduled for March, but it’s a Kickstarter product, so that may change at the drop of a hat.
One additional downside — because it uses a unique Bluetooth protocol for processing high-quality stereo audio, it has to use a special app to record. The app is fine, but I want to use these mics for everything: Snapchat, Vine, Hyperlapse, Instagram, FaceTime, Skype. So even if Binauric’s headphones pan out, my dream of binaural Snapchats is in the hands of phone and app makers who would have to work with this protocol, and maybe one day binaural can reach the masses.
Articles like this are dichotomous – especially for a website like Chip-Monks, whose reason for existence is devices, of which smartphones form the bedrock. However, we have always questioned the incessant use of devices, as humankind seems to get more and more artificially-stimulated with each passing app.
Content seems to override real-world stimuli, Google Search replaces books, and skimming article headlines has replaced actual reading. Knowledge then, is limited to 140 characters.
But we aren’t bucking a trend here, just to be sensational. There is a genuine reason for pause, and contemplation.
Smartphones in today’s information-superhighway world have become more of a necessity than a luxury. Everyone’s dependent on their devices to get updates, carry out professional tasks, stay in touch with near and dear ones.
As we feel enriched by more connectedness, and swallow more content than we ever used to, one question we find ricocheting in people’s minds (usually those of parents) is – whether this incessant use of smartphones is a healthy phenomenon or not?
Well… the world is undoubtedly a smaller place, events create ripples internationally and people are significantly more aware and verbal. But there are psychological repercussions too.
Let me try and be more elucidatory, so we’re on the same page.
Several psychiatrists have been trying to establish what exactly it is that keeps us glued to our smartphones almost 24×7. One of the foremost in the field is Dr. David Greenfield, Assistant Clinical Professor of Psychiatry at the University of Connecticut School of Medicine, and founder of The Center for Internet and Technology Addiction.
“A mobile device is a portable dopamine pump”, observed Dr. Greenfield. “Dopamine is a pleasure neurotransmitter in the mesolimbic reward circuitry of the brain, which is a primitive, old part of the brain”.
Internet addiction is not a new phenomenon, and as Dr. Greenfield rightly stated, “Internet addiction is not a new thing, but we’re now seeing more people become addicted to different forms of content, like social media, or fan fiction, or online shopping, not to mention gambling, gaming and sex.”
And it’s not just limited to knowledge and social interactions. There’s some money to be lost too.
Dr. Greenfield further sheds light on online shopping – “Amazon is a perfect example. There’s very little threshold to cross before you click and buy something. It’s almost instantaneously rewarding. When you purchase something you get a hit (of dopamine) and you get a secondary hit once you receive it”.
Now you know why Amazon and others work so hard to create one-click-buy solutions – so you don’t really get a moment to stop and reconsider. They spend millions of dollars perfecting this gateway-to-heaven approach, and have you smitten with their ease-of-use story.
There’s more cause for concern.
Non-stop usage of smartphones can ruin your relationships with your partners, family members and colleagues. Relationships, for example, might turn toxic due to phone-snubbing – the act of snubbing someone in a social setting by looking at your phone instead of paying attention to them.
A friend of mine once confided that immersing himself in his smartphone often helps him escape awkward situations. But this is no consolation for the long-term damage it causes to our interpersonal relationships. My friend only got to see the other side of the equation when the tables were turned on him.
Not only are conversations a forgotten art, even those that do happen revolve around updates, posts and photos on Facebook; birthdays and greetings are now Facebook posts instead of cards or personal calls; and suddenly online games are more fun than time with friends at an arcade or around a pool table.
And then there is the hunger for instant appreciation. Peoples’ lives have begun to revolve around the number of Likes and Shares that their social posts receive, and others’ measure of themselves is weighed by the number of birthday wishes they received online. The worst part of all of this is how, unconsciously, we’ve handed over the reins of our emotions to others, and become susceptible to the most ephemeral of slights.
Most of us have seen the series of memes regularly posted on social media about how long we have to wait to get replies to our texts. Funny as it is in memes, in reality it’s no joke. It points to another trait that has crept into our personalities (and how we view ourselves) due to our regular and constant use of smartphones – we expect one another to be available almost immediately, and when we fail to receive a (timely) reply, it causes us an insurmountable amount of anxiety, stress and in some instances, even depression.
And, with the advent of online dating, we don’t even go out to meet new people – since we can now meet them with the help of a mouse-click; is there worse proof of our deteriorating relationships and communication skills?
Well, meditation, reportedly seems to be an efficient way to develop your strength when you are all by yourself. Try it – you should at least be able to talk with yourself occasionally – and meditation is the best way of speaking to yourself!
There’s another solution, a far easier one. Put your phone away. Consciously make the effort – the best way to do that is to leave your phone in the other room.
It would be unwise and irrational to suggest that one completely abstain from the use of smartphones, because let’s face it, smartphones (or their replacements) will exist everywhere. However, you do need to regulate and control your use of these smartphones (and tablets).
Be brave! Look up, talk, listen and share, in the real world!
Like it or not, Facebook has become one of the world’s largest media empires, helping circulate things, all things, faster than the speed of light. Sometimes, on a good day, Facebook can be faster than even Trump can tweet.
And no matter how much people may deny it, a huge portion of the human race is currently influenced by what they read on their Facebook News Feed – be it emotional videos, tech news, vacation spots, product launches, or simply “sensational news”. The fact that an algorithm and a huge web of friends, relatives and acquaintances are all wound together inextricably means that a significant amount of what one gets to see is popular because others in our trusted ‘circle of life’ consider it important.
Unfortunately, thanks to information overload and Notification Deluge, most of us now do not check other sources/websites to validate what we read on Facebook.
But there’s a nuance that most people miss.
Let’s take an example. Say you’re looking to buy a baby stroller, from a recognisable brand. Now, if you see a post from a friend about a stroller she got for her little one, you’d go check out that brand and that model on the brand’s website, right?
You might even FB Messenger or WhatsApp your friend directly, to check on her experience with the vehicle, and the store she got it from.
Additionally, a lot of us also make it a point to head over to Google and read experts’ reviews about the stroller. Yet more of us make it a point to then head to Amazon’s website, and check out real-world users’ unbiased and personal reviews of the stroller. We’d check the percentage of people who up-voted it, and those who disapproved of it. Then we’d read the write-ups (from the latter, especially).
Only once we were convinced we had enough knowledge about that particular stroller, would we make our purchase decision – primarily because it involves spending our hard earned money.
Well, most of us don’t do that for the News we read on Facebook!
Most people read the headline, glance at the photo and move ahead. But somewhere, the topic sticks to our synapses. Other, more conscientious people (or those troubled/impacted by the headline) actually read the entire article… on Facebook. And thus the seed is planted.
The proportion of people who actually head over to a bona fide news site to read more about the incident or to get another viewpoint on it, is infinitesimally small. So small in fact, that there aren’t any stats out there from any known/reputed research organizations.
The problem thus, is that in this day and age of content overload, Facebook is our biggest newspaper – personalised, real-time, and often, wrong.
People are misinformed easily, simply because till now an algorithm singlehandedly decided how popular a story was, and then further promoted it as “Trending”, thereby pushing that particular snowball down the proverbial hill. The die is cast, and we unknowingly participate in legitimising a fictional story into a “reality” – one that could change the destiny of a nation, or of the world – simply because we propagated something which, via our friends’ News Feeds, will convince them too. So the Trusted One often becomes the misinformant.
Now, multiply that by 1.86 billion (Facebook users). The world’s got a problem on its hands.
Slammed for its supposed role in promoting the Fake News that supposedly swung the U.S. Presidential elections, Facebook, an American for-profit corporation, needed to clear its name.
So, the company now makes use of non-partisan third-party organisations to evaluate the factual accuracy of stories.
It has launched a much-hyped, crowdsourced Fake News crackdown initiative within the U.S., which allows everyday users to flag a post as “fake news”; flagged posts are then evaluated by third-party organisations like Snopes and Politifact for factual accuracy.
If the fact-checkers agree that a story is deceptive, it will appear in News Feeds with a “disputed” tag, together with a hyperlink to a corresponding article explaining why it could be false. These posts then appear lower in the News Feed, and users receive a warning before sharing the story.
There’s more: News is bubbling about the addition of a new Dislike Button, which will be placed in all posts, alongside the Like and Comment options.
Similar, determined efforts are being seen in parts of Europe, amid threats from the European Union to strike back hard at Facebook’s role in spreading misinformation. The social networking website recently revealed similar fact-checking partnerships in Germany and France, ahead of the upcoming elections in those countries.
It is unclear how many people currently have access to this Fake News debunking machinery, but we’ll know soon enough.
It is heartening to see that Facebook is working so hard to fulfil its obligations (perceived and implied) and to ensure that mankind doesn’t suffer at the behest of some mischief mongers.
As I wind up, I must remind you that Facebook doesn’t bear this cross alone. You and I do too, because as we wrote earlier, we are ourselves propagating it by Liking, Sharing and Forwarding stuff without establishing its veracity. Worse, we’re building our own opinions without realising that we’re doing so.
So, as a friend recently advised me – “Stop headline-skimming; pause, read, reflect, and only then, contribute”.
Finally Some Sanity - Huawei Agrees 6 GB of RAM Isn't Really Adding Any Value Over The 4 GB Version
Huawei is a relatively new player in the Indian market and its business is primarily garnered through budget smartphones.
Its devices, while affordable, are equipped with some of the latest features available to consumers worldwide. Keeping up with other budget phone manufacturers’ practice, Huawei too is in a race to offer more memory – both in the form of RAM and internal storage capacities.
Which is precisely why it is developing a range of phones which shall offer both the 4 GB RAM + 64 GB of inbuilt storage configuration, as well as the 6 GB + 128 GB inbuilt storage configurations.
However, in a recent Weibo post, the COO of Huawei’s P-Series of smartphones, Lao Shi, unequivocally commented that the 4 GB RAM + 64 GB internal storage configuration is a better combination for consumers, than the 6 GB RAM + 128 GB inbuilt storage configuration for daily usage.
He followed that up by mentioning that the 6 GB RAM configuration, although psychologically more pleasing, costs much more than the 4 GB RAM configuration and the brunt of the elevated cost is ultimately borne by the end user herself.
This, according to him is an option that does not add much cream on top of the cake especially when it is to be utilized only for everyday (read: normal) usage.
It is difficult to understand whether to consider it as the executive’s personal stand or the company’s view, especially because what followed was an even more descriptive, radical and aggressive statement!
Lao Shi suggested that the competition between companies and manufacturers to provide more memory is akin to the Cold War between the USA and the USSR, where both sides tried to achieve the top spot by increasing their nuclear weapon stockpiles.
Such a statement, coming from an executive of a budget phone manufacturer is unheard of!
Shi then went on to reassure consumers by saying that their requirements shall be governed by personal choice and that those who wish to purchase the higher memory model are free to do so.
The current Huawei P9 comes in only two variants – 4 GB RAM + 64 GB ROM and a 3 GB RAM + 32 GB ROM configuration models respectively, while the Huawei P10, P10 Plus and Mate 9 Pro models offer a 6 GB RAM configuration.
The lack of much variety within the 6 GB nest may have been what prompted such a statement.
Irrespective of the causes for such an opinion, it is apparent that Huawei is not particularly keen on increasing the maximum memory available on its smartphones going ahead. Too much R&D is unlikely to be committed towards the goal of adding more memory at affordable rates.
Huawei is reportedly looking to introduce models within the INR 10,000-20,000 range – which is the exact price range in which Huawei has seen maximum growth in India.
Since the phones with the 6 GB configuration are most likely to be priced in the INR 16,000-20,000 range, Huawei seems to intend to have only a perfunctory presence in that playing field.
The company has promised not to compromise on quality and to retain its primary focus on improving the real performance-impacting specifications.
Which ones those would be, neither you nor I can tell at the moment – however, I for one, am glad that a manufacturer has the foresight and intellect to question “how much is too much“!
Prepping For 5G In India: Airtel Elects Nokia As Its Partner
There are usually only two positions in a race that are the hardest fought – the leader’s and the one who comes in last. All others are relegated to various badges implying their role of “also-ran” contenders, except perhaps to their near and dear ones.
Since a leader’s always under threat, it needs to do everything to stay ahead – wake up earlier in the day, practice harder, find the right coach to mentor it, and most definitely plan and execute beautifully.
That’s what Airtel does well. Always.
No wonder it’s India’s largest telecom operator. By a long margin. Because it does things differently, and it starts doing them earlier in the day, than anybody else.
Many of you may remember Airtel being the world’s first operator to do something as bizarre and unheard of as completely outsourcing its telecom network and infrastructure! Well, that helped Airtel stay ahead of network demands and be extremely scalable, while at the same time keeping its bottom line firmly ‘in the black’.
Now, as the world begins to define 5G and how it could be leveraged, Airtel has already lined up to work with Nokia to prepare itself for the next generation of communication technology.
Chip-Monks had written about Nokia’s imminent foray into India for 5G back in June of 2016! You can read it here.
Under a new agreement, Nokia and Airtel will collaborate to drive the definition and development of new services, with a focus on taking the path to fifth generation network connectivity.
Not only that, the Airtel-Nokia duo will also prep India for the full-form arrival of the Internet of Things (IoT).
There’s more. Nokia will begin helping Airtel strengthen its existing 4G network – improving efficiency and operations, and driving overall cost effectiveness. This will give Nokia an in-depth picture of the network & infrastructure that Airtel currently rides on, which will then help the duo ensure Airtel’s readiness for the rollout of 5G, whenever that happens.
Before we go further, I need to remind you not to expect 5G services anytime soon.
With the standard still being defined, and the charter still being drawn up, the expected time-frame for the extravagant global launch of 5G is sometime around 2019-2020.
Some of you have asked us what 5G really means.
5G promises to enable dramatic improvements in data speeds, reduction of the latency in the network and allow more ‘agility’ – such as the ability to enable new capabilities like ‘network slicing’.
Recently, international headlines were about the International Telecommunication Union (ITU) agreeing on what the final specifications of 5G would look like. The minimum download speed offered on the 5G network will be 100 Mbps, and it would be capable of supporting over one million devices per square kilometer of geographic area!
5G specifications are being finalised keeping in mind the future where electronic devices will talk to each other over wireless networks, forming what’s called the Internet of Things aka IoT.
So this new-age network will allow telcos to support a growing number of customers and potentially billions of connected IoT devices with consistent Quality of Service, laying the foundation for smarter cities and rural communities, connected vehicles, industrial automation, remote healthcare and a myriad of business possibilities.
Simpler English? 5G will be faster than today’s wired broadband, and yet it’s a mobile technology. It will become the backbone of all your, and your household devices’ internet needs.
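To put that 100 Mbps guaranteed minimum in perspective, here’s a quick back-of-the-envelope calculation (file size and figures are ours, purely illustrative):

```python
# Illustrative only: transfer time at a given link speed.
# Note: file sizes are in megabytes (MB), link speeds in megabits
# per second (Mbps), and there are 8 bits in a byte.
def download_time_seconds(file_size_mb, link_speed_mbps):
    """Seconds to move a file: megabytes * 8 = megabits, divided by Mbps."""
    return file_size_mb * 8 / link_speed_mbps

# A 1,500 MB movie over 5G's minimum guaranteed 100 Mbps:
print(download_time_seconds(1500, 100))  # 120.0 seconds, i.e. two minutes
```

Real-world speeds depend on signal, load and backhaul, of course, but even the floor of the 5G spec comfortably outpaces most wired broadband connections in India today.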
“Why 5G Already? We’re barely on 4G!”
Another question we get asked regularly. And err… we agree. It’s too soon to think 5G, when most of us still haven’t boarded the 4G train. In fact we wrote a (rather) forthright article saying the same thing a few days ago. You should read that too. It’s available here.
We haven’t changed our opinion. Indian telcos still need to improve their 3G and 4G performance, but standing at the cusp of the IoT revolution as we are, we (the country) need to be ready. For the economy’s sake as well as for the sake of being future-proof.
While India is getting used to 4G LTE, telecom operators are leaving no stone unturned to make sure they are ready for 5G networks. Even the Government of India is being very supportive of the effort.
The Telecom Ministry has expressed its interest in being an early adopter of 5G technology. Reliance Jio, one of Airtel’s chief rivals and among the most talked-about telcos, is reportedly teaming up with Samsung for its 5G network planning, and has even claimed that it already has the fibre infrastructure in place to support it.
Nokia’s AirScale solution allows telecom service providers to scale their networks and freely add subscribers while keeping latency (connectivity to the backend servers, etc.) at an imperceptible level. Airtel will use this technology from Nokia to stay ahead and deliver a class-differential in user experience.
Abhay Savargaonkar, Director, Network Services at Bharti Airtel, said – “Airtel has always been a pioneer in rolling out the latest technologies to deliver a superior experience to its customers. 5G and IoT applications have tremendous potential to transform lives and we are pleased to partner with Nokia to enable these future technologies for our customers.”
Sanjay Malik, Head of India Market, Nokia, said: “After our successful association with Bharti Airtel for 2G, 3G and 4G technologies, we are proud to partner to prepare for the future of mobile networks. We will leverage our global experience in 5G-related industry projects and collaborations to enable Bharti Airtel to prepare their networks for greater capacity, coverage and speed”.
It’s definitely interesting to see how the telecom operators in India are eyeing the next generation of communications, and gearing up for it.
We expect the rollout of 5G to be way quicker than the 3G and 4G services in India! We are cheeky, we know, but then that’s what drives brands to improve – aggressive customer expectations!
Tired Of The Noise Of Facebook And Twitter? Escape To Raftr, To Talk And Discuss
I know a lot of people who are bored of Facebook (and Twitter), but are forced to keep visiting them as they’ve become their major source of news and current affairs.
The problem is, the stuff that feeds their intellectual hunger is buried deep within a mass of unending and irrelevant posts about cats, cakes, local business promotions and what not. As a result, users end up signing up for a raft of different apps and services to read intellectually stimulating stuff, or even regular everyday stuff like sports and TV shows.
The problem most users now face is that there are many, many sources of news, but not enough places where discussions can flourish – where opinions and thoughts about news and affairs of import can be shared and broadcast.
Created by Yahoo’s former President, Sue Decker, Raftr is a new startup that might become your retreat from all the noise on current social media platforms. It encourages the user to follow topics of her interest – like sports, news, and TV, rather than people.
In fact, Raftr is intended to discourage abuse and self-promotional noise. One way in which Raftr accomplishes this is by tying user accounts to phone numbers and not email addresses, which in turn makes it difficult for the user to register multiple times as is possible with other platforms these days.
Decker and one of Raftr’s investors, Michael Dearing (founder of Harrison Metal, a venture capital firm) explain:
“Using Raftr is like going to a really great dinner party where there’s little rooms talking about different topics and you can move from room to room, but you know that if you go into the ‘White House discussion room,’ there’s going to be some people who take this seriously and want to hear from others,” Decker said. “It’s not a shouting fest, it’s not megaphones. It’s a conversation.”
Users have the freedom to choose the topics that they want to follow and see what people are posting about them in a typical feed manner or on the topic page.
Where did this all start from?
Well, from Decker’s own disappointment with Yahoo – for the reason that it (Yahoo) failed to tap into its main asset, News and Current Affairs.
It failed to transmute its media content into something that people could consume easily on their smartphones and share it with their friends.
Decker wants to move into that gap. She’s created a space where superfans can discuss anything they like, from favourite TV shows like “Stranger Things” to any recent political issue, without any unnecessary clutter.
The way we see it, Raftr will occupy a middle ground between a fan forum and checking a hashtag on Twitter.
Speaking of other social networks, Decker said, “They typically start more general and then more specific ones crop up, to address a specific interest…Once the more general ones get so broad, it’s hard to find what you’re looking for”.
There’s more to Raftr though.
Other than posts made by users, Raftr offers something else as well – its own editorial staff, which will deliver one blog post a week on each topic.
The entire team is driven by Decker’s view that content should be used to amp up the conversation, not as a means to an end.
To that end, Raftr’s been created to allow users to talk about an event in private, with an individual or custom groups using chat-room type functionality, if they so prefer.
This way, users can also end up finding new like-minded friends.
Raftr, as of now, seems to have mapped out its source of revenue – which is a slight departure from convention. Others rely on ads or VC money, but Raftr (at least in the initial stages) plans to make money via the editorial content its team produces, rather than by dropping ads onto your feed.
The question here is will Raftr be able to make a difference or make its presence felt in the social media market?
Thing is, this is no rocket science, so it’s nothing very unique either – if you’re already deeply engaged in a fan community, or subreddit, then Raftr isn’t likely to make a huge change in your activity pattern.
But if you are a diehard who wants to go somewhere to talk about the shows you love or news you’re concerned about, Raftr does, at least at first glance, feel easier to navigate than stepping into the firehose of Twitter or Facebook!
Having been on Raftr for a while now, I can tell you – it’s a nice place to hang out and talk – it’s a lot like friends sitting around the (virtual) table at home or in a quiet coffee shop and chatting up. It doesn’t feel like you’re trying to yell over the din of a pub (which is what Facebook’s kind of become these days).
But Raftr’s not there yet, entirely. A lot more content needs to start flowing to really get me interested and coming back several times a day – but that’s a factor of the number of users on the platform (which should be growing significantly over the coming months), and I need to contribute too. As a platform and an app, Raftr needs to land in the zone between the intricacies of Reddit and Tumblr, and the effortlessness of Facebook.
That said, I believe Raftr’s going to gain steam really soon. You should go check it out and start expressing yourself. User Generated Content 101!
On a scale from 1 to 10, where 1 is “Don’t give a darn” and 10 is “I read every word”, how much attention do you pay to the Terms and Conditions of any product you buy, or service you subscribe to?
Whether your answer is 2 or 9 or anything else, by the time you finish reading this article, we’re going to attempt to turn you into a highly-alert commando who harbours a general mistrust of T&C documents, and who will never again sleep well after skipping past a terms and conditions section!
The Internet might already seem like a crazy universe to you – full of fascinating stuff, inane stuff, and some downright absurd things too – but you may have already heard that it also has its dark side.
The bad news is, our very very dear smartphones, too have a dark side!
Now a vital part of our lives – from waking us up in the morning to paying for our coffee, from sending confidential emails to making transactions worth thousands – almost everything is done via our phones.
And such a powerful device thrives thanks to the superpowers bestowed upon it by apps, built by the million for any and every task. Given the insane number of apps in use today, developers are the new messiahs.
But we need to address the elephant in the room – how safe and secure are these apps?
Amidst the ever-growing demand for freshly brewed apps and exponentially-inflating competition in the app-developer market, most developers are pressed for time and need to hit the Store shelves before competition beats them to the punch.
Thus, they often treat the security of their applications very lightly – intending to return to it later – and in the process, they jeopardise the device’s user.
Security is often not the primary concern of app developers, for a lion’s share of the apps available on the Play Store, App Store and others are click-bait that lures in audiences with fancy, misleading claims thrown into advertisements.
Often, an average user does not read the disclaimer that pops up before signing up for the product, thus missing out on the major chunk of security and privacy warnings, given out in a very subtle and placating format.
The bitter reality, though, is that users cannot enhance the security of an app even if they wish to. Being cautious is their only option.
In layman’s terms, Apps are nothing short of helpers. By that very nature, they can get acquainted with your routine and empower themselves to derive information even without your knowledge. There have been cases of users’ personal data being breached – like their gender, age, phone number, location and other potential information – which is later collated and sold.
You shouldn’t be surprised to learn that in-app ads in smartphones are one of the key players in this harvesting of data.
There are many shady apps on the app stores designed to retrieve the unique ID number of each phone. Personal information given out during app registrations is then matched against this unique ID, compiling a full-fledged profile of the user, which is sold on to companies for marketing purposes.
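To see how trivially such matching works, here’s a toy sketch of the joining described above – every field name and value in it is hypothetical:

```python
# Toy illustration of device-ID matching: scattered fragments of data,
# each tagged with the same device ID, merged into one profile.
# All names, IDs and values here are made up for illustration.

from collections import defaultdict

ad_network_logs = [
    {"device_id": "A1B2", "location": "Delhi", "app": "flashlight"},
    {"device_id": "A1B2", "location": "Delhi", "app": "game"},
]
registration_data = [
    {"device_id": "A1B2", "age": 29, "gender": "F", "phone": "98xxxxxx01"},
]

profiles = defaultdict(dict)
for record in ad_network_logs + registration_data:
    device = record.pop("device_id")
    profiles[device].update(record)  # each fragment enriches the profile

print(profiles["A1B2"])
# One device ID now ties together location, app habits and personal details.
```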
What makes this even more convoluted is that app developers voluntarily accept in-app ads for monetary gain. Frequent usage of a particular app reveals a user’s likes and dislikes, baiting in-app-ad companies into advertising products aligned with the user’s tastes.
None of this is fair. And the fact that it is unknown to most users only makes the matters worse. Security should be the major agenda of any and all app developers.
Banking apps are often the favorites of any hacker. This is obviously, thanks to the financial gain at play. But those aren’t the only targets. There are many more.
Targeting of applications for data can be done in various ways.
One example is WhatsApp, the messenger service run by Facebook, which recently switched to 256-bit encryption that promises 100% security to its users and their conversations. Yet the exchanges that happen over this supposedly secure system are backed up to a server online, where they reside far longer than they would on your device. This results in automatic storage of a user’s data on a server that has its own security problems.
This kind of storage can also happen in the cloud. One example is the automatic backup of users’ data to iCloud (for iOS users) and Google’s platform (for Android users). Every scrap of information and data generated on the smartphone is automatically backed up to these storage platforms.
However, these supposedly-safe storage platforms have been proven unsafe. Take, for instance, the leak of private images of actress Jennifer Lawrence from her iCloud account. Following that leak, a wave of similar cases was reported, and Apple had to take measures to make the storage platform more secure.
A recent experiment by a team of experts at Jots, “tested 110 popular, free Android and iOS apps to look for apps that shared personal, behavioral, and location data with third parties”.
The results were quite alarming and bizarre.
73% of Android apps shared personal information such as email addresses with third parties, and 47% of iOS apps shared geo-coordinates and other location data with third parties. That is almost three-fourths of the Android apps and about half of the iOS apps caught adding to this menace.
Reports said that an alarming 93% of the tested apps connected to a shady domain, safemovedm.com. Chances are, these stats are the mere tip of the iceberg.
Apple may be the epitome of quality and safety, but even with all the advanced technology Google possesses, there are gaps. Compared to iOS, Google’s Play Store does not have an impressive track record – and that stems from the fact that, unlike Apple’s grit and determination, there have been no sustained steps or procedures on Google’s part to check the relevance and safety of apps before making them available in the Google Play Store.
This could probably be because, unlike Apple, Google does not have a strict vetting system that app developers must clear before officially getting their app into the store. Android apps are available even on uncertified platforms. Since Apple’s App Store is a centralised point of distribution, it gives users confidence that the apps they download have been tested, certified and validated by Apple. As a result, the App Store is very nearly malware-free.
Perhaps you’d now ask how all this is not illegal, and how they keep doing it. Well, it is not illegal as long as they (the app developers) put their data-sharing or data-mining intent somewhere in the fine print of the application’s Terms and Conditions.
Yes, the same fine print we barely pay any attention to! So, even though we might love to sit back with a “how could they do this” attitude, the onus also lies partially on us – on our choice to remain ignorant and let ourselves be abused.
So I’m advising that you stop believing the poster-boy persona that these companies keep putting out, look past the gloss! Wouldn’t you rather be safe, and have your privacy, than be blissfully unaware?!
So, the next time, prior to downloading an app, remember to read the fine print – the Terms and Conditions exist to tell you exactly what you’re giving away.
You'd Be Surprised To Learn What Shook Amazon's Cloud Service And The Internet Last Week
You might be familiar with Amazon primarily for its online retail website that sells almost anything and everything you could think of.
However, Amazon also owns, manages and runs a gargantuan suite of services called Amazon Web Services, one of which is a web hosting service called Amazon S3, which stands for Simple Storage Service.
The Amazon S3 acts as a storage facility for some of the largest websites and platforms in the world.
As an aside, would you believe that all of Netflix, Airbnb, BMW and even a lot of NASA’s stuff is housed within AWS?! In fact, till recently, AWS also played host to all of Apple’s iCloud data (you’ll recall our article from May 2016 citing Apple’s imminent movement of iCloud data from AWS to Google’s servers).
Clearly, AWS has succeeded over competing platforms thanks largely to its reliability and scalability (and not so much its pricing). Well, the reliability meter just plummeted.
Last week, a lot of websites in the U.S. suffered outages as a key piece of supporting structure – storage via Amazon S3 – hiccuped.
So, what exactly happened and why is this even making the News?
To understand the magnitude of the problem, it is essential to know the nature of service provided by Amazon S3, and S3’s importance to various websites.
Going back to AWS’ client base, we need to add more to the list for you to get the proper perspective. Amazon S3 is used by more than 120,000 prominent domains across the world including some major websites like Quora, Giphy, Instagram, IMDb, American Airlines, Imgur, and Slack.
So when the storage backbone goes down, you can well imagine the impact of the outage on the developed world, which forms a significant portion of AWS’ client base. While some websites went completely offline, some faced slowdowns and others broke partially, with only a certain subset of services unavailable.
For example, in the case of Slack, users couldn’t upload files to their group chats for some time, but chat itself kept working. This clearly shows that the impact of the problem wasn’t uniform, nor the outage total.
For the initial hour, Amazon’s status dashboard wasn’t even showing a problem – slightly amusing, because the dashboard itself relies on S3 to function. But the company later acknowledged the problem, moved swiftly to address it, and subsequently explained the reasons for the outage in a detailed post.
S3’s subsystems obviously play a vital role, but one is more vital than the others – the subsystem that “manages the metadata and location information of all S3 objects in the region” (as Amazon put it).
In the absence of this subsystem, services that rely on it can’t undertake even basic data retrieval and storage tasks.
Amazon explained that S3 itself is built so it can afford to lose many a server at any time and still function normally; but that didn’t cover the eventuality of the critical hub holding location information for all the other systems itself getting knocked out of commission.
Well, the kicker in the story is that the mishap that caused the subsystem to tumble, shaking the internet, was due to… a small typo!
“Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended”, Amazon said. “The servers that were inadvertently removed supported two other S3 subsystems”.
What’s hidden in the sub-text is that something as simple as an unfortunate typographical error could cause servers to be removed altogether. Imagine – a finger slip, or plain haste in unchecking a box that ought not to have been unchecked (or something equally mundane), causing a denial-of-service of such magnitude!
I’d hate to be the poor soul whose slip-up caused the hullabaloo! I just hope he’s not been sent to dredge Lake Michigan with a bottle cap!
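For the curious, the class of safeguard against exactly this failure – refusing to remove more capacity than is sensible in one go – can be sketched like this. The function names, pattern matching and 5% cap are purely illustrative, not Amazon’s actual tooling:

```python
# Hypothetical sketch of a guard rail on a destructive ops command:
# resolve what a (possibly mistyped) pattern would remove, and refuse
# to act if it matches more than a small fraction of the fleet.

class UnsafeRemovalError(Exception):
    pass

def servers_matching(fleet: list[str], pattern: str) -> list[str]:
    """Resolve which servers a pattern would actually hit."""
    return [s for s in fleet if pattern in s]

def remove_capacity(fleet: list[str], pattern: str,
                    max_fraction: float = 0.05) -> list[str]:
    targets = servers_matching(fleet, pattern)
    if len(targets) > max_fraction * len(fleet):
        raise UnsafeRemovalError(
            f"{len(targets)} of {len(fleet)} servers matched; refusing.")
    return [s for s in fleet if s not in targets]

fleet = [f"index-{i:03d}" for i in range(100)] + ["billing-001"]

# A narrow, intended removal passes:
print(len(remove_capacity(fleet, "billing")))  # 100 servers remain

# A fat-fingered pattern that matches far too many servers is blocked:
try:
    remove_capacity(fleet, "-0")
except UnsafeRemovalError as err:
    print("blocked:", err)
```

Amazon’s own postmortem said much the same thing: the tool was modified to remove capacity more slowly and to block removals that would take any subsystem below its minimum required capacity.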
Anyway, back to the story. This web of problems (pun unintended) ensnared the S3 as it faced trouble while handling the massive restart.
The company in this regard explained, “S3 has experienced massive growth over the last several years and the process of restarting these services and running the necessary safety checks to validate the integrity of the metadata took longer than expected”.
The outage started around 12:35 pm Eastern Standard Time (in the U.S.) and impacted primarily American companies, so the mid-day crisis managed to create huge ripples in the Internet Ocean.
Thus people commented, and reacted, and re-tweeted and second-guessed with gay abandon.
Dave Bartoletti, a Cloud Analyst with Forrester, said, “This is a pretty big outage…AWS had not had a lot of outages and when they happen, they’re famous. People still talk about the one in September of 2015 that lasted five hours“.
Amazon, however, quickly began mitigation efforts and was able to claw back from the outage in near-record time, once it identified and got aboard the offending servers.
Being a customer-centric brand, they even put up a public apology openly acknowledging the issue – “We want to apologize for the impact this event caused for our customers…We will do everything we can to learn from this event and use it to improve our availability even further”.
I can almost sense the calm serenity that surrounds the sea prior to a storm, in that statement.
Knowing Amazon as well as we do at Chip-Monks, we know their services are top-notch, and they have almost never been responsible, directly or indirectly, for such an outage before this incident. We also know firsthand that Amazon’s (and AWS’) first and foremost mission is their customers’ sanity and continuity of business. So I am ultra-confident that Amazon will have learnt from this, and already taken steps to prevent anything or anyone from having such fundamental access – and hence the capability to sink the ship – ever again.
Unbiased as we are at Chip-Monks, we can unequivocally and unanimously say that we swear by AWS, its processes and its systems. They are already well structured and possess multiple redundancies for everything; and this outage, while unfortunate and avoidable, would’ve taught them what 20 years of boardroom discussions couldn’t. AWS will be better for it.
In a ZeniMax-Oculus kind of spat, Alphabet Inc., the owner of Google has claimed that one of the top engineers in its self-driving car program decamped with thousands of confidential files, including designs, in order to help him start the self-driving truck company Otto.
He then proceeded to quickly sell this company to Uber who in turn took advantage of Otto’s technology.
Uber has denied any of these claims.
The self-driving car unit Waymo, a subsidiary of Alphabet, has filed a lawsuit jolting the fast-growing and highly competitive autonomous vehicle industry.
The confrontation was a long time in the making: the complex relationship between the companies was tense from the start. According to various people familiar with the situation, things between the companies soured further as they increasingly competed with each other on many levels.
Bill Maris and David Krane of Google Ventures (GV), the firm’s venture capital arm, had provided substantial funding to Uber in its early drive to raise capital in 2013, despite pushback from the rest of Google, since it already had an investment in competitor Sidecar.
Maris and Krane prevailed and the deal is now regarded as GV’s greatest success. On paper, the firm’s initial 2013 investment of USD 258 million gained about 14 times its value over the next three years to more than USD 3.5 billion.
Now, if the Waymo suit damages Uber, Google Ventures’ investment in the ride-hailing company stands to go down as a Silicon Valley rarity: a large funding deal undermined by the firm’s own investors.
“Whatever Waymo gains, Google Ventures loses”, said Stephen Diamond, associate professor of law at Santa Clara University.
The lawsuit is just one of the many in a series of recent public setbacks for Uber, including allegations of sexual harassment that prompted an internal investigation, a video of Chief Executive Travis Kalanick arguing with an Uber driver that led him to make a public apology and Uber’s admission on Friday, that it used a secret tracking tool to avoid authorities.
“We have reviewed Waymo’s claims and determined them to be a baseless attempt to slow down a competitor and we look forward to vigorously defending against them in court”, Uber said in a statement in response to the lawsuit. “In the meantime, we will continue our hard work to bring self-driving benefits to the world”.
A spokeswoman for Google Ventures declined to comment.
Uber’s aggressive culture was the subject of many conversations at Google Ventures, a source close to the transaction said. It’s even more ironic that, hoping to influence the startup, the venture firm at first encouraged a flow of talent from Google to Uber!
Things are tense, the industry is watching, and Uber being under the pump is not good for anyone – there’s too much money riding on Uber, and its expansion plans are just about starting to bear fruit. On another level, a lot of what Uber is doing is appreciable – it’s allowing people to travel differently, more confidently, and extremely comfortably. So it’s solving real-world problems.
Here’s an example of the kind of things Uber is doing, which is actually helping change lives:
A deal with Barts Health NHS Trust in London will see patients using Uber for journeys including hospital appointments and generally getting out-and-about when they might otherwise be housebound or reliant on family and friends.
The firm will develop and use the UberAssist disabled access cars and the UberWav service for wheelchair users.
Services will also be available to carers, using the app alongside traditional forms of transport to determine the most efficient means for moving people.
NHS patients with illnesses ranging from cancer to dementia will be looked after by Cera carers under the new London scheme, which uses a smartphone app to coordinate care, book drivers and keep relatives informed of their care.
Dr Ben Maruthappu, Cera’s co-founder and president, said the move would “radically integrate care and transport through technology”, adding, “Older people and those with disabilities will now have access to the highest quality drivers, while carers will be able to efficiently travel to ensure they can provide services in the right place at the right time”.
So, clearly, Uber’s doing good things via its services, and so will Otto as it blooms in the coming years.
We just hope that Uber gets itself out of all these messes, which are undoubtedly disturbing its management and distracting it from its ability to do good.
Siri’s had several brain transplants!
It wasn’t done in a day or a week or over a few months. Almost since the day Apple introduced its voice assistant back in October 2011, Siri has undergone an almost continual series of brain transplants that shifted its silicon-powered mind from pure Artificial Intelligence to AI powered, in part, by machine learning.
Apple recently shared its perspectives on artificial intelligence and where it fits in the Apple ecosystem, which is, apparently, everywhere.
Another question the team at Apple ponders is how AI can be grown while respecting users’ privacy. In particular, though, they focused on how the introduction of machine learning could transform the now five-year-old digital assistant.
Machine learning is considered a toolset within AI – it’s a way of building Siri’s ability to respond to conversational queries. Siri learns concepts by being fed endless numbers of examples. In other words, Siri will understand how you might ask a question about directions not by memorising every possible permutation of mapping questions, but by recognising what a map question sounds like, based on all the other examples it’s been fed.
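To make that “learning from examples” idea concrete, here’s a deliberately tiny sketch – nothing like Siri’s real models, just the principle of classifying a new query by its similarity to labelled examples rather than by hand-written rules:

```python
# Toy example-based intent classifier: a new query is labelled by
# whichever stored example it shares the most words with.
# The examples, labels and matching method are all illustrative.

examples = [
    ("how do i get to the airport", "directions"),
    ("show me the way to the station", "directions"),
    ("will it rain tomorrow", "weather"),
    ("do i need an umbrella today", "weather"),
]

def classify(query: str) -> str:
    words = set(query.lower().split())
    def overlap(example: tuple[str, str]) -> int:
        return len(words & set(example[0].split()))
    # Pick the label of the most similar stored example.
    return max(examples, key=overlap)[1]

print(classify("what is the way to the museum"))  # -> directions
print(classify("is it going to rain"))            # -> weather
```

Real systems replace word overlap with learned statistical representations, but the shape is the same: more examples, better recognition of what a question “sounds like”.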
In Siri’s case, the core technology behind the assistant is entirely different from what consumers encountered on the iPhone 4S five years ago. It has gone from a rules-based system to one built on machine learning and modern voice recognition.
Most users were oblivious to the changes, which might be considered a kind of victory, while others, Apple said, noted a distinct improvement in Siri’s ability to understand natural language.
Apple’s interest in artificial intelligence didn’t spring forth out of the ether in 2011. Almost 25 years ago, a relatively simple form of AI appeared on Apple’s Newton, the first PDA. That groundbreaking product ultimately failed, but it had its moment.
I remember when a former publication, PC Magazine, lauded the mobile device for its trainable handwriting recognition. Apple continued to work on AI-infused technologies for years, but the introduction of Siri in 2011 served as a sort of inflection point, quickly becoming the most visible part of Apple’s AI work. Even so, Siri is far from alone in Apple’s current AI strategy.
Earlier this month, Apple CEO Tim Cook told the Nikkei Asian Review that AI is “horizontal in nature, running across all products“. More importantly, it’s already being used by Apple “in ways that most people don’t even think about“.
Behind the scenes, Apple’s AI works to manage product battery life based on usage patterns and what Apple has learned broadly about battery usage to manage power consumption at a component level. The facial recognition in Photos is also powered by AI. It’s even at work on the iPad Pro to ignore errant swipes of hand or Apple Pencil.
Sounds simple, but to do something like that, the system must understand the user’s intention, which can vary.
New Brain, Better Thoughts
When Apple started using machine learning, it saw a dramatic improvement in Siri’s speech recognition, especially with accents; also vastly improved was Siri’s ability to understand speech in the presence of background noise.
Even so, Siri suffers from the same issue as other voice assistants: It can’t hold a conversation.
Yes, Apple spends a lot of time building personality (ask Siri if it’s AI and it’ll respond, “Sorry, I’ve been advised not to discuss my existential existence”) and cultural intelligence into the AI, and Siri can fake it — to a point.
Ask Siri if you need an umbrella today and it’ll give you the weather forecast; if you immediately ask, “What about tomorrow?“, it’ll know you’re still talking about the weather and the possibility of rain, and give the right response.
Context-wise, it’s impressive, but Siri still falls far short of the give-and-take necessary for an actual conversation.
However, it’s worth remembering that Apple introduced the term “voice assistant” to the digital lexicon (much like it did Personal Digital Assistant decades ago), and it takes that term seriously.
I can almost hear the mirth running around in your head, but I’m serious. There’s a lot that Apple’s doing for Siri and its ability to help you.
Future versions of Siri may do far more than just engage in time-burning chit-chat. A true assistant can be proactive. The current version will tell you, based on traffic conditions, when you need to leave to make an appointment. Eventually, Siri might start to connect the dots on, say, the state of the phone and how far you must travel and tell you to charge up before you leave. Of course, Siri’s ability to grow may be somewhat limited based on one of Apple’s core principles: user privacy.
Google’s impressive intelligence and increasingly proactive nature is largely based upon its Knowledge Graph and what it knows about you (and billions of other people) and the relatively persistent user profile that travels with you from Chrome login to Chrome login. Apple on the other hand, does nothing of the sort. In fact, Apple insists that its brand of AI doesn’t need to build a profile of you to work and they don’t have an economic incentive to do so.
Apple can get away with ignoring your personal data because it’s not trying to deliver contextual advertising to you. Apple sells hardware, while Google sells (recent hardware releases notwithstanding) primarily contextual advertising driven by user data.
Apple sells millions of iPhones, iPads and Macs each quarter and has an exploding services business – advertising simply isn’t where its money comes from.
While Google’s intelligence and AI-powered responses come from Google’s servers, Apple generates most of Siri’s intelligence locally. The company trains the AI in the cloud, where, Apple said, it’s getting 2 billion queries a week, and then delivers that intelligence to each Siri-hosting Apple device (these are the occasional brain transplants). Those devices then apply that intelligence to your locally stored data.
More interesting, though, is that Apple also does some machine learning on your iPhone. Apple believes it has the advantage here over competitors because it designs its own chips, and contends that it’s significantly ahead of others in the mobile technology space. Unlike Google and Amazon (parent of the voice assistant Alexa), Apple designs both the software and hardware – a strategy it believes gives it an advantage, including the ability to do neural processing at the silicon level on devices as small as the Apple Watch.
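The split described above – train centrally, then push the resulting model down to the device so that inference runs against data that never leaves the phone – can be illustrated with a deliberately tiny sketch. This is a conceptual toy, not Apple’s actual pipeline (which ships real neural-network models, not a single threshold); the function names and numbers here are purely illustrative.

```python
# Conceptual sketch of cloud-training / on-device-inference.
# NOT Apple's real stack -- just the shape of the architecture.

def train_in_cloud(aggregated_samples):
    """Pretend 'cloud' training: derive a model (here, one threshold)
    from aggregated, anonymised data collected server-side."""
    return {"threshold": sum(aggregated_samples) / len(aggregated_samples)}

def run_on_device(model, local_value):
    """On-device inference: only the model travels to the phone;
    the user's own data stays local and is never uploaded."""
    return local_value > model["threshold"]

model = train_in_cloud([0.2, 0.4, 0.6, 0.8])  # happens server-side
decision = run_on_device(model, 0.7)           # happens on the phone
print(decision)
```

The point of the pattern is the direction of travel: the model moves to the data, rather than the data moving to the model.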
Apple’s Approach To AI Is ‘Laudable’
“I think that there’s real-world proof about being able to go do distributed machine learning without every node in the cluster having access to all the data”, McClellan added, noting that it is quite possible to do consensus-based artificial intelligence with more anonymous data.
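The idea McClellan is pointing at – distributed learning in which no node ever sees the whole dataset – can be sketched in a few lines. In this toy version each node computes a model parameter from its own private data and shares only that parameter; a coordinator averages the parameters without ever touching raw data. All names and figures are illustrative assumptions, not drawn from any real system.

```python
# Toy sketch of distributed learning without data sharing:
# nodes exchange model parameters, never their underlying data.

def local_update(private_data):
    """Each node computes a model parameter (here, simply a mean)
    from data that only it can see."""
    return sum(private_data) / len(private_data)

def aggregate(updates):
    """The coordinator averages the received parameters -- it never
    sees any node's raw data, only these anonymous summaries."""
    return sum(updates) / len(updates)

node_data = [[1.0, 2.0], [3.0, 5.0], [4.0, 4.0]]  # private per node
updates = [local_update(d) for d in node_data]
global_model = aggregate(updates)
print(global_model)
```

Only the three summary numbers cross the network; the six raw values stay where they were generated – which is the privacy property being praised.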
Even as McClellan gives Apple high marks for its approach to data, he wonders about Apple’s lack of participation in the newly formed Partnership on Artificial Intelligence, which counts IBM, Google, Facebook and Amazon among its members: “It feels like Apple should be more open, in general”.
How far Apple will go without being more open and joining other companies in their efforts to keep AI technologies from getting away from their masters, and how smart an AI can truly become without building customer profiles, are fair and open questions.
For now, at least, this is the path Apple has chosen for its brand of AI, and one thing is clear: the Siri you’re using now will undergo further brain transplants and be far different from the Siri you use five years from now.
Betting On The Winning Horse - Google To Invest In Indian Startups
In order to survive, a company has to continually find greener pastures. Even if that company is Google!
And, Google has always been one of the best scouts for potential growth opportunities. Forever searching for its next big canvas, there’s practically no corner of the globe that Google doesn’t probe for avenues to invest in, or get involved with.
And India’s been big on Google’s radar for many years now.
Now, there’s something interesting in its sights, and Google seems to be moving in.
According to three sources directly involved with the matter, Google has begun scanning India’s startup and venture capital horizon. Its intention seems to be to make some acquisitions – a first for India.
India has been one of the most trusted technological and customer pools in the world for almost a decade, and it is filled with potential. With a growing population touted as the next billion internet users of the world, Indian startups are in a business sweet-spot.
And as a direct investor or acquirer, Google is looking for companies it can play Big Brother to, and guide towards an even bigger future than the startup may have envisioned.
According to one of the sources, “Two major areas of focus for them are products for the next billion users and acqui-hires in high-tech areas”.
Although the source quickly added that a transaction is not imminent yet.
Another source, an investor, added that Google, with its almost unlimited capacity to invest, was highly unlikely to put money into consumer-internet companies such as online retailers.
That actually is quite logical, given that online retailing is one of those businesses where competitors such as Amazon and Flipkart already have a strong presence. Investing in any other online retailer would be a highly risky, potentially dead-on-arrival proposition.
As part of building its Indian base, Google recently hired Seema Rao (who was a Vice President at investment bank Avendus Capital) to lead corporate development, or the acquisitions and strategic investments department in India and Southeast Asia.
As a developing nation, India offers an almost carte blanche opportunity to the company. Google would be surely looking to tap the investment potential of small cities and towns, where a major share of the internet base resides.
According to the sources, who didn’t wish to be named, given that the strategy is still in its nascent stage, the company could be making a major bet on strategic investments and/or acquisition of technology-enabled firms in the financial services, healthcare, education or mobile utilities sectors.
Education and healthcare are two areas where major investment is much needed. State involvement in these two sectors, in monetary terms, is quite lacking, and a potential investment from Google might just give them a new lease of life.
As for the acqui-hires, where the company buys another not for its monetary assets but for its employees, Google might be targeting cyber-security and cloud computing startups.
Google has, over the years, emerged as one of the most predatory companies in terms of acquisitions – it’s already acquired more than 70 companies via its venture capital surrogate, CapitalG (in fact, CapitalG was in the news recently for investing 15 million dollars in education technology startup Cuemath).
Yet Google is still to acquire a single startup in India. But we’d like to think that’s because it was busy wrapping up other countries before it got to the Big Mamma that is India.
India already accounts for the largest number of Android users and is a key test market for YouTube offline, so Google already knows to respect us!
Well, the economy, and most definitely the startup ecosystem, could do with professional entities investing time and effort into this fertile space, because there are more nuggets here than duds!
Mark Zuckerberg Wants Facebook To Have More Power In Our Lives. And We Should Resist.
Mark Zuckerberg, the founder of the social media platform that changed the world, recently put out a manifesto in a Facebook post. The manifesto was personal and business related, and there were two key points to it:
It admitted that there are certain larger problems in the world that have been caused by Facebook (and the likes of it), while at the same time it seemed to suggest that the solution to them is more Facebook.
As Facebook has grown, it has acquired a progressively more important position in our daily lives with each passing year, and with that position has come power.
For a huge portion of the world’s population, Facebook is a primary source of media and news consumption. And since Facebook’s algorithms decide what each user gets to see, Facebook has the power, as well as the responsibility, to shape the media world its users live in.
With the recent hullaballoo around Fake News and Facebook’s key involvement in propagating it, a lot of alarming figures came to light.
The social media platform has over 1.8 billion users, which is in the ballpark of one-fourth the population of the world.
Now when we ask how many people actually get their news from social media platforms like Facebook, the answers tend to be varied, since a calculation of this kind is next to impossible.
From all the sources that have tried to estimate that number, we know that about 44% of adults worldwide get their news from social media, while the share of Americans who do is somewhere in the ballpark of 42–66%.
What those numbers reflect is power, in plain and simple terms.