Sony’s patented smart contact lens technology seems straight out of a sci-fi movie!
A diminutive device, but unbelievably capable: Sony seems to have cracked many difficult stumbling blocks (that larger brands hadn’t been able to, so far) to come up with something that is as scary as it is exciting.
A contact lens that can be worn just like a regular lens, but one that comes with the ability to click photos and record videos, instantly play them back, store them internally and even transmit them to a nearby device. All in a transparent, practically invisible form.
Seven inventors at the tech pioneer’s Japan office are the brains behind the new patent, through which Sony is going to be able to muscle its way into a game that so far featured players as big as Samsung and Google, along with other independent intelligent minds working in different nooks and corners of the world.
The contact lens from Sony will come with the functionality of clicking photos and videos with auto focus and zoom capability, along with the ability to store them internally and play them back.
To achieve this, the lens will use a combination of sensors: a piezoelectric sensor, an infrared sensor and an acceleration sensor. Working in conjunction, these small electronic sensors will measure changes in pressure, temperature, acceleration and force, which the device will translate into control instructions.
There’s more: the contact lens could also be equipped with gyroscope technology to correct tilted images, get rid of blurred images, and control aperture.
Got your attention yet?
As exciting as it sounds, we must pause and consider the challenges and the triumph of engineering this little busybody.
The lens will be an intricate assembly of many delicate components like the main control unit, a wireless communication processing control unit, image pick-up lens and unit, antenna, sensors and a storage unit.
Once again, all that in a nearly transparent, near-invisible form! Amazing!!
The piezoelectric sensors will convert mechanical energy from movement nuances, like the pressure and force of a motion, into electrical energy which will be used to trigger and operate the lens’ functionality.
The most important part, though, is controlling the lens with no outwardly visible physical control. Here’s another mind-bending achievement: the patent states that the smart sensors embedded in the lens are able to differentiate between an involuntary blink and a deliberate blink.
“It is known that a time period of usual blinking is usually 0.2 seconds to 0.4 seconds, and therefore it can be said that, in the case where the time period of blinking exceeds 0.5 seconds, the blinking is conscious blinking that is different from usual blinking (unconscious blinking)”.
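As a rough illustration, the blink-length gate described in that passage can be sketched in a few lines. This is a hypothetical sketch: the 0.5-second threshold comes straight from the patent text, but the function and constant names are our own invention.

```python
# Hypothetical sketch of the patent's blink classification. The 0.5 s
# cutoff is quoted from the filing; everything else here is assumed.
DELIBERATE_BLINK_THRESHOLD_S = 0.5

def classify_blink(closure_duration_s: float) -> str:
    """Classify an eyelid closure measured by the piezoelectric sensor."""
    if closure_duration_s > DELIBERATE_BLINK_THRESHOLD_S:
        return "deliberate"   # treat as a control input (e.g. shutter trigger)
    return "involuntary"      # usual blinking (0.2-0.4 s): ignore

print(classify_blink(0.3))   # a usual blink -> involuntary
print(classify_blink(0.8))   # a conscious blink -> deliberate
```

The real lens would of course fuse this with the other sensors, but the core idea is just a duration threshold on the piezoelectric signal.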
Now, let me explain how the lens will work in real life.
The wearer’s eye movements will be used to guide and operate the lens as described in the patent. The patent elaborates, “the time period of the eyelid closure is sensed in accordance with output from a piezoelectric sensor provided in the lens unit”. The display control unit, thus, will control the display direction of the captured image according to the tilt of the lens unit sensed by the tilt sensor.
An image pickup unit is configured to capture an image of a subject which is then stored temporarily in the storage medium; the integrated transmission unit will then transmit the captured image to an external device.
Power you ask? Well, the lens will not derive power from batteries. The power source could be a hybrid of power being generated using movement and electromagnetic conduction (where power can be drawn via radio waves or electromagnetic field resonance).
Apart from Samsung (who have patented a smart lens that can project images directly into the user’s eye), Google’s been in the smart contact lens fray too: it’s been actively working on its research around contact lenses that are capable of detecting the wearer’s blood sugar levels, designed to help diabetes patients.
Taking the research further, Google filed a patent application, published earlier this month, devising contact lenses that could be injected directly into the eyes of the users!
Thus it’s safe to summarise that research around contact and wearable lenses is clearly gaining momentum. Time will soon tell what technology or functionality gains traction and comes out of the labs to the consumers. This innovation will also help augmented reality take a quantum leap forward, and that may explain the ever-growing interest in this category of products.
Will leave you with one for the road – Patent Literature 2 from Sony proposes a thin image display device in which a display unit and a lens array unit are integrally provided on a curved surface, the thin image display device being shaped to be fully wearable on an eye such as a contact lens. So Sony’s very serious about this one!
I know all this sounds very complicated and perhaps a little scary (to have a powered gizmo sitting on your cornea), but think about it: it’s the same reservation people must’ve felt (and later conquered) about regular contact lenses too. So there is hope, and given the popularity of contact lenses and ever-improving nanotechnology, this could well be a reality soon.
Meet Cortica: An Israeli AI Company That's Teaching Machines To Observe And Reason, Like Humans Do
The human brain processes all information via electrical impulses. You knew that, right? Well, that is exactly what inspired Igal Raichelgauz, CEO of Cortica, an Israel-based Artificial Intelligence startup. He saw the human brain as an electrical circuit and set out to replicate that circuitry to create an AI-based capability that would endow machines with a similar skill set.
Cortica wanted their AI to have a sight sense on par with that of humans.
And we do indeed have an astonishingly complex sight system. Everything you see, the photoreceptors in your eyes convert into electrical signals. All that information is transferred by those signals to a part of your brain which sorts and analyzes the color, depth, shape, and size of all those objects. This data is then received by the cortex, the part that most interests Cortica.
Remember poststructuralism? For those of you who need help with that reference: you only know a table as a table because you see it in relation to a chair. If the chair didn’t exist, how would you know what a table is, or what it’s used for?
Something similar happens in your visual cortex. It classifies all the objects you see into different categories by assessing them in relation to all the objects you’ve ever come across.
That’s how you know what you just saw was a bird, or a bottle, or your friend, or anything else.
Sure, you know how little time it takes for our brain to perform the entire process since you experience it every waking moment of your life, but have you ever stopped to wonder, to revel or to acknowledge the sheer speed and processing power behind it?
You know what you saw the moment you saw it. Cortica believes it has reverse engineered this process, replicated the biological visual cortex of humans.
Guess how they achieved that?
They worked on a piece of rat brain, a piece that is still living. Yup, you read that right!
The brain gave them access to the electrical interface of all the neurons contained in that tissue. They were able to understand the input-output process of the neurons. They discovered that with some modifications, a neural network could create a “conceptual signature” – without any prior training. It would be able to recognize similar objects, and differentiate them from others.
Such an AI would be able to learn by itself, much like babies do – by observation and reasoning. While we observe and learn from the world around us, it would do the same from the data available on the web.
This is Cortica’s own, unique approach to what is called ‘unsupervised learning’ within the field of artificial intelligence.
Just so you’re on the same page, there are 3 kinds of machine learning – supervised, unsupervised and semi-supervised.
Supervised learning is when you teach the AI from a pre-determined data set, so you already know the output. This is the most commonly used one.
Unsupervised learning is when you give the AI no prior training, and you tell it to solve the problem with only the necessary input. The output from such an algorithm is unknown. For instance, you want your AI to categorize certain geometrical shapes into matching groups.
If you’re using supervised learning, you would have taught the AI about circles, squares, hexagons etc. before giving it the problem. In unsupervised learning, however, you would teach your AI nothing before asking it to solve the problem. It would see the various shapes, categorize them based on similarity, and give its own label to them. This is a process much more difficult to teach an AI.
Semi-supervised learning falls between these two. The AI would have an incomplete set of reference data, and it would hazard the best possible guess based on the limited data it has, and it’s own abilities to extrapolate the data.
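A toy sketch can make the unsupervised case above concrete. This is not Cortica’s actual algorithm, just an assumed illustration: each “shape” is reduced to a single feature (its corner count), and the program invents its own labels for whatever groups it finds, with no pre-taught categories involved.

```python
# Toy unsupervised grouping: the program has never been told what a
# "square" or "circle" is; it just groups by a shared feature and
# makes up its own group labels, much like the shapes example above.
def group_by_feature(shapes):
    """Assign an auto-generated label to each distinct feature value."""
    labels = {}     # feature value -> invented label
    grouped = {}
    for name, corners in shapes:
        if corners not in labels:
            labels[corners] = f"group-{len(labels)}"
        grouped.setdefault(labels[corners], []).append(name)
    return grouped

observations = [("square", 4), ("triangle", 3), ("rhombus", 4),
                ("circle", 0), ("hexagon", 6), ("ellipse", 0)]
print(group_by_feature(observations))
```

Squares and rhombi land in one group, circles and ellipses in another, and the labels “group-0”, “group-1” and so on are the program’s own, not ours. A supervised learner, by contrast, would have been handed the names “square” and “circle” in advance.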
Now do you see the ramifications of what Cortica has achieved? Two words – it’s huge!
But Cortica isn’t completely done yet. There’s still time before the technology enters the consumer market, yet Cortica already claims to have created an AI that can see and process information like humans.
So many possibilities!
Self-driving cars have already entered the marketplace. But imagine if they could actually recognize and understand what an object or obstacle ahead of them is. The car would stop by itself if it sees a pedestrian crossing the road, thus preventing many road accidents.
They might be able to recognise accidents on the road and call for help independently.
Your smart home gadgets would revert to the settings that are specific to you when they see you approaching. An air conditioner could increase the temperature if it sees a child in the room, so they don’t get cold. The refrigerator could detect which groceries have run out and remind you to get more.
Amazon’s grocery store in Seattle is already automated, but what if it could actually see you? That would remove the need to even scan the app at the entrance. You could just walk right in and it would recognize you from its database, and be able to process you and your purchases independently and accurately!
The possibilities are truly endless.
Other AI startups such as DeepMind, RealFace, and Genee have been acquired by Google, Apple, and Microsoft respectively. Would Cortica too become a target to be acquired, or would it be able to hold its own against them? Its technology certainly looks powerful enough.
The world is changing, friends. Get ready to see it differently, soon.
Soon You'll Be Able To Back Your Entire Computer Up To Google Drive
Google Drive is gearing up to be the answer to all your data and backup needs.
Soon, Google Drive will be able to automatically back up all the files residing in any folder on your computer that you point it to. The backup would include your computer’s desktop, the files residing in your Documents folder and all other possible locations on your computer.
This is a big change, as it means you will no longer have to place files only in a specific ‘Drive’ folder on your computer, as you need to today.
All of this comes via an app called Backup and Sync. The app is the latest version of Google Drive for Macs and PCs, and is integrated with the Google Photos desktop uploader.
From what it sounds like, this new app will replace the currently existing Google Drive app and the Google Photos backup app for computers.
The change, however, is only available to consumer users for now (those who use Google Drive for personal everyday things), and not to business users. Google is recommending that business users who have been using G Suite, for now, stick with the Google Drive for Mac/PC until the new enterprise-focused solution, Drive File Stream, is made available to them.
Drive File Stream will come with another approach altogether, which will allow users to access huge corporate data sets without taking up the equivalent space on their hard drives. The feature will definitely be something that business users will look forward to.
Once the personal version of the app goes live, users will be able to sign into the uploader via their Google Account, and then select which specific folders on their PC or Mac they want continuously backed up to their Google Drive. It is not yet clear how much more users will be able to do with this expanded storage. The assumption is that users will be able to open and edit some common file types within Drive. It is, however, not clear whether users will be able to sync those files back to the computer using the drive as an intermediary.
Another question that arises is that of the storage limit. The expanded backup will quite certainly count towards your Google storage limit too. Given that, the new app will be a quick and easy way to hit the 15 GB data storage limit that free accounts currently enjoy from Google. Users can then rent additional space from Google, which will cost them USD 2 a month for 100 GB, USD 10 for 1 TB, and USD 100 a month for 10 TB.
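To put those tiers in perspective, here is a small helper that picks the cheapest plan covering a given backup size. The tier figures are the ones quoted above; the function itself is just our own convenience sketch, not anything Google ships.

```python
# Google Drive tiers as quoted in the article: free 15 GB, then
# USD 2 / 100 GB, USD 10 / 1 TB, USD 100 / 10 TB per month.
PLANS_GB_USD = [(15, 0), (100, 2), (1000, 10), (10000, 100)]

def cheapest_plan(needed_gb: float):
    """Return (capacity_gb, usd_per_month) of the smallest sufficient plan."""
    for capacity, price in PLANS_GB_USD:
        if needed_gb <= capacity:
            return capacity, price
    raise ValueError("exceeds the largest advertised tier")

print(cheapest_plan(12))    # fits in the free 15 GB tier
print(cheapest_plan(250))   # needs the 1 TB plan
```

A full-computer backup of photos and video will blow past 15 GB quickly, which is presumably the point: the feature is also a funnel into the paid tiers.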
The new feature is definitely a smart move on Google’s part. It is a handy feature that users have been demanding from Dropbox for a while now.
Dropbox (like Google Drive) currently requires users to save files in a particular folder on their computer for them to be backed up. Microsoft’s OneDrive is another cloud storage service which lets users automatically back up files from their computers, but even there, users have to save files in a particular folder, or choose to save them to OneDrive in the first place.
Google’s new feature is likely to be popular with consumers looking to keep copies of their photo, video and music libraries. Given the ransomware attacks that have not faded from the memory of millions of users around the world, Google’s service might come as a relief to many.
The service was to be available from the 28th of June, but Google has postponed its release, “based on your valuable feedback, we’ve decided to delay the launch of Backup and Sync while we make improvements to the product“.
The service can be expected to be available in a few weeks’ time.
Surprise, Surprise! Apple Is Opening Up Its Secret Repair Machine To Third Party Stores
“Hey Siri, I broke the screen on my iPhone. Where can I get it fixed and how long will it take?”
You may well be able to ask Siri that pertinent question and get a surprisingly pleasing response, soon!
Apple’s customers will soon have more choices, and amenable ones, at that, when getting their broken devices repaired.
Apple, in a surprising move, is loosening its grip on “tricky” iPhone repair and allowing owners to get their devices fixed at a place other than the Apple Store.
Apple is reportedly going to do so by bringing its fabled ‘Horizon‘ machines to about 400 third-party repair centers across 25 countries by the end of 2017.
This will come as a big relief for users in certain areas where the density of Apple Stores is not too high, and thus users have to wait a long time for screen replacements and other iPhone-related issues to be fixed.
Apple has always been secretive about its tech, to a point that until now it had never even formally acknowledged the existence of the ‘Horizon’ machine.
What is the Horizon Machine?
Horizon is a machine that is integral to the repair of a damaged iPhone (or iPad). Even though it does not do any actual repairs itself, it is needed to calibrate iPhone display repairs on complex technologies like 3D Touch and home button malfunctions.
What makes this machine more important is that it alone is authorised to install and implement a replacement fingerprint sensor, as other repair procedures won’t be able to tell the iPhone’s processor to accept the new hardware. Remember the infamous “Error 53” that struck iPhones in January 2016, bricking them with no forewarning? That was this very mechanism at work.
The machine has the ability to access every part of the iPhone. It works to calibrate the phone, meaning that it can also connect to iOS itself and potentially give access to proprietary software. Apple has always guarded it closely for precisely this reason, claiming that giving such machines to third-party vendors would open its phones up to hacker attacks. Apple now seems to be softening on that position.
Without this machine, smaller stores had been limited in the extent of repair that they could conduct. Such stores were then just collection points, and had to send the device to centralised centres for more extensive (and intensive) repairs.
Bringing the machine to more stores, third party stores specifically, is a surprising move on Apple’s part, as the tech giant has always kept this tech under strict lock and key.
The Cupertino-based giant has been running this decentralisation with a small number of outlets across the world, as a pilot program for about a year now.
One of the chain of stores that was a part of the pilot program is Best Buy, which has had a Horizon machine secretly installed in one of its Miami stores.
Some stores in London, Shanghai, and Singapore were also amongst the early recipients of the machine, in the pilot program.
Another retail chain, ComputerCare, is expected to get the machine in their stores soon.
“We’ve been on a quest to expand our reach“, said Brian Naumann, Senior Director of Service Operations at Apple. He also went on to add that one of the reasons Apple is taking this step is that repair wait times have grown manifold at some of the company’s busiest retail stores, and have become a major sore point for customers, who, of course, want their devices fixed as soon as possible.
Critics have long believed that Apple has been so secretive about its repair technology in order to maintain the revenue stream from the repair of its devices.
While Apple has never disclosed the amount it earns through repairs, industry analysts place the figure between USD 1-2 billion a year. Considering that the entire smartphone repair business worldwide is estimated to be in the ballpark of USD 5 billion, that is a significant portion of the pie that Apple has been raking in.
In Apple’s defense, however, it got into the repair business just three years ago with the introduction of the iPhone 5. Before that, it would charge a customer with a severely damaged device a “repair fee” and simply replace the device with a refurbished or new one.
Apple is starting the roll out with machines in around 200 of Apple’s 4,800 authorized service centers over the next few months, including places like Colombia, Norway and South Korea where it doesn’t have a retail presence. The number is expected to double by next year.
We have our fingers crossed for some stores in India to get it too.
How many of us are scared to send our children out to play because of the fear of accidents? A lot, right?
Well, it stands to good reason. Car accidents account for a major share of the rise in accidental deaths.
Distracted drivers have quickly become the bane of the roads. Texting and being on a call while driving have become the two primary causes of loss of lives: untimely and tragic ends.
Yet, no amount of persuasion seems to convince some people to let go of this fatal habit.
Now, what happens when people don’t willingly let go of bad habits? Some external force usually has to intervene, and in this case, Apple is becoming the first form of external force that could potentially stop people from using their gadgets while driving.
Apple, per recent news, was granted a patent for “Detecting Controllers in Vehicles Using Wearable Devices”.
Restated in plain English, this patent implies that Apple will use the in-built motion detection features of a device, say an Apple Watch, to determine whether the wearer is driving a vehicle, and if so, the wearable will then automatically regulate the number of notifications that the driver receives.
The motion sensors gather and feed information into an associated system, which in turn figures out the angular velocity of the device. That done, the system establishes whether the velocity of the wearer is below the programmatically-mandated minimum threshold or not.
If the velocity is below the threshold then the incoming flow of notifications is not affected, but if the velocity level is above the threshold, then the system does a double take – it approximates the direction of gravity of the reporting wearable, as well as the gravity from another device present in the vehicle (which could well be an iPhone, or perhaps another phone). All said, the system then automatically interdicts notifications sent to the wearable as well as the phone.
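The decision flow above can be sketched as a short function. To be clear, this is a hypothetical illustration: only the “below a velocity threshold, leave notifications alone; above it, compare gravity readings from the wearable and a phone in the vehicle” logic comes from the filing, while the names, units and the specific threshold values are our own assumptions.

```python
# Assumed sketch of the patent's decision flow; the 2.0 m/s and 0.9
# thresholds are illustrative placeholders, not values from the filing.
VELOCITY_THRESHOLD_MPS = 2.0

def should_block_notifications(velocity_mps, watch_gravity, phone_gravity):
    """Return True if the wearer appears to be driving."""
    if velocity_mps <= VELOCITY_THRESHOLD_MPS:
        return False                        # walking or stationary: pass through
    # Moving fast: compare gravity directions (unit vectors) reported by
    # the watch and a nearby phone to approximate a driving posture.
    similarity = sum(w * p for w, p in zip(watch_gravity, phone_gravity))
    return similarity > 0.9                 # devices aligned: likely driving

print(should_block_notifications(1.0, (0, 0, 1), (0, 0, 1)))   # too slow: False
print(should_block_notifications(15.0, (0, 0, 1), (0, 0, 1)))  # driving: True
```

The real system would presumably use richer posture and orientation models, but the two-stage gate (velocity first, then gravity comparison) is the essence of what the patent describes.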
This new patent can have a positive effect on the drivers who get distracted while driving because of the constant need to keep checking their phones to see if any new notifications have arrived or not.
This is a revolutionary step towards making driving safe, and along with that, it is also a big step towards protecting and safeguarding the other commuters on the road.
This shows that perhaps we are a step closer towards making our travels safe.
There are two itsy-bitsy downers though – one, we don’t know when this would come to be – since we only know of it as a patent at this stage and can’t estimate the progress that Apple’s actually made in turning this into a real-world product/feature. Also, we aren’t sure if this feature would come to existing Apple products or forthcoming ones.
Second, this feature seems restricted to Apple devices only; we don’t yet know how much cross-platform integration Apple would allow for something it’s patented.
That said, several people have come out in favour of this patent and a lot of us are now eager to see how well they’d put up with the newfound-old solace of driving in silence and (largely) at peace.
For close to ten years, Apple’s iPhone has been one of the torchbearers of the smartphone industry – keeping its consumers at the leading edge of technology, and compelling its competition to constantly innovate in order to stay relevant.
Announcements of new iPhone models churn the market with immeasurable excitement – however, over the last four years, the design ethos of the iPhone has seen marginal changes. Thus, while there’s a lot of excitement before the unveil, a lot of it deflates rapidly after Apple’s keynote event.
The only positive of the disappointment (and I am being extremely brave calling it that) is that the world begins holding its breath for the next September, and demand becomes pent up all over again.
It’s no different this year. There’s a lot of anticipation, and while the last three years’ disappointment is causing people to be very circumspect with their hopes and desires, this year there’s a new ingredient in the mix that is fanning some additional hope.
2017 will mark the tenth anniversary of the iPhone and people are hoping that the American tech giant has been building something truly remarkable and different, in their high-tech cave at Cupertino – in that milestone’s, and Jobs’ honour.
Trade pundits, while cautious, are predicting that Apple will most likely appease the market with substantial design changes on their upcoming iPhone 8 model (we’re assuming that is what it would be called on release, although a simple ‘iPhone’ moniker could well be used instead) as an attempt to woo old consumers and attract new ones.
Well, if there’s a bunch of people even more excitable than customers, it is the trade pundits. As is always the case at this time of the year, they’ve been watching Apple, its supply chain, patent approvals and market acquisitions with an eagle’s eye.
Thanks to their optimism and focus on these telltale signs, a lot of rumours have been doing the rounds, the majority of which have come from credible sources.
More than ten prototypes are said to be under testing, to arrive at a decision for the final design. However, the implication of so many prototypes being considered is that it leaves us outsiders with a whole gamut of possibilities, most of which are wishful thinking on our part.
That said, based on whatever information is available as conjecture, we are listing the most prolific speculations being derived from the river of rumours. Bear in mind, though, that none of these have been officially confirmed in any way yet.
Given that the last three iPhone models have looked nearly identical, consumer sentiment reads as “bored”. Thus, the pundits’ assumption that the iPhone’s tenth anniversary would be the perfect environment and time for a major design revamp holds some water.
In what the trade pundits are claiming as another departure from standard practice, Apple is said to be releasing three versions of the iPhone this year. The OLED-enabled iPhone 8 could be positioned as the top-line ‘premium’ model, towering over two regular-style LCD-equipped models.
Me? I’m holding my verdict at this stage, because I’m reading elsewhere that Apple’s facing some supply-chain issues with some parts, that may actually delay the release of their relaunch vehicle by a few months.
While I’m reading those stories in several channels, I am also reminded of two truths. First, Apple makes things happen: if the supply chain is holding up release dates, Apple will ensure the roadblocks are cleared in time for the September launch, as it leads up beautifully into the Christmas rush, which is usually Apple’s goldmine. I believe they make more money during Christmas than at any other time of the year, China and India notwithstanding.
The second truth is that Tim Cook was the person who set up Apple’s supply chain – digging up gold where no one believed any existed, identifying and contracting partners that no one even knew existed. So effective was Cook as a negotiator and so strong are Apple’s contracts, that if there’s any human way to meet timelines, Apple will get there.
So, I still have reason to believe that September will be a fair-weather month. Hang in there folks!
BlackBerry Branded Appliances And Wearables Might Soon Be A Thing
One of the biggest mobile giants of yesteryear, BlackBerry was known for its various business phones, secure software and most lovingly, its revolutionary messaging app, the BlackBerry Messenger.
I still remember how popular BBM was just a few years ago, when everyone from businessmen to young adults and even teens was hooked to the first real Instant Messaging app for phones. To me, in many ways, WhatsApp is the progeny of BBM.
However, all was not well in the BlackBerry world, and soon, with the advent of iOS and Android, BlackBerry had to bid farewell.
Almost relegated to a relic of the past, BlackBerry recently shocked the world with its announcement, and we, like others, can’t contain our excitement.
As we’d written a few days ago, BlackBerry’s pivoting to a lot of new stuff (you should read that article of ours – to know the kind of exciting stuff BlackBerry’s getting into).
Not only is BlackBerry licensing its wares outside the smartphone world, the Canadian company is also going to manufacture several gadgets outside the smartphone domain. BlackBerry is now also allowing other firms to borrow its brand tag and enjoy full support from BlackBerry in this endeavour.
Going the licensing way benefits both, the firm in question and Blackberry, and will help the ailing Canadian behemoth to expand its horizons in a multi-dimensional fashion.
BlackBerry wants to bring back its one-of-a-kind security protocol and data encryption capabilities. Things seem to have improved for BlackBerry after it joined hands with TCL in a licensing agreement last year.
“Tablets, wearables, medical devices, appliances, point-of-sale terminals and other smartphones” are some of the gadgets that will have the BlackBerry name on them, and this new venture is referred to as the “next phase” of the company. In October 2016, it was reported that BlackBerry had tied up with Ford Motor Co. to develop special software for its vehicles.
Moving to medicine was another unforeseen pivot. While using any connected medical care device or service, it is of primal importance that patient data is free from external threats and is safely connected with the healthcare system.
Armed with one of the best security encryptions anywhere in the world, BlackBerry jumped in to help ensure that all the data saved on associated mobile devices cannot be hacked. Just the name, should give people a lot of confidence in such devices and services.
Each of these BlackBerry proteges will carry technology or code from BlackBerry, which will bear a strong resemblance to its rapidly evolving Android model.
Word has it that wearables based on BlackBerry tech are on their way. Made by non-BlackBerry manufacturers, these will be the beginning of a new line of secure peripherals. Although the details haven’t yet been revealed, once BlackBerry’s wearables and appliances hit the market, the competition will get serious. Apple, probably the biggest nemesis of BlackBerry the smartphone maker, will have to watch out, because BlackBerry’s comeback looks extremely promising and the brand seems like a very determined phoenix about to make a re-entry.
You don’t need to be Batman to order a supply drop from Alfred, courtesy of the Amazon Prime Air facility. Three years ago, Jeff Bezos coolly pulled out a drone delivery initiative from Amazon, which most critics were quick to dismiss as a publicity stunt. The drone-delivery system looked straight out of a Philip K. Dick novel, and drone technology was still unwieldy and costly to build. Not today. The airspace is filled with drones, so much so that countries like India are trying to regulate their use. Amazon is determined to push through government regulations and start drone delivery services in the near future.
The company showcased the first public test flight of its “Prime Air” drone delivery system at MARS, a private Amazon tech conference in Palm Springs, California, where the drone delivered sunscreen. The company has been privately developing its drone-delivery services in the relatively quiet area of Cambridge, England, where the team has been simulating real-life delivery scenarios, while the company waits for the Federal Aviation Administration, which, just like its Indian counterparts, is yet to formulate rules for drone flight over populated urban areas.
Drone flying is still viewed as an amateur fun activity that can be a nuisance to defense and civilian aviation. The stereotype is mostly because of drones’ military use in the Gulf theatre, which is still imprinted on the public imagination.
Drone delivery will be done from select Amazon points which will house the delivery drones. Once you place an order and select “Prime Air” 30-minute delivery, your packages will be fed through a conveyor system to the drones for delivery. The drones are equipped with sensors giving them access to real-time information about the environment and obstacles such as birds and other drones, using “sense and avoid” technology. At the destination, the drone will land on a safe zone marked by a pre-placed Amazon logo mat, delivering products in no more than 30 minutes. After dropping the package, the drones will head back to their recharging station to recharge for a new delivery.
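The delivery flow above can be pictured as a simple state machine: order placed, package loaded via the conveyor, flight with sense-and-avoid, drop at the landing mat, then back to recharge. The sketch below is purely illustrative; the state names and logic are our own, not Amazon's.

```python
# Illustrative state machine for the "Prime Air" delivery flow described
# above. States and names are assumptions, not Amazon's terminology.
STATES = ["ORDERED", "LOADED", "IN_FLIGHT", "DELIVERED", "RECHARGING"]

def advance(state, obstacle_detected=False):
    """Move to the next delivery state; hold position on an obstacle."""
    if state == "IN_FLIGHT" and obstacle_detected:
        return "IN_FLIGHT"      # sense-and-avoid: reroute, stay airborne
    nxt = STATES.index(state) + 1
    return STATES[min(nxt, len(STATES) - 1)]

state = "ORDERED"
for _ in range(4):              # a clean run with no obstacles
    state = advance(state)
print(state)                    # ends at RECHARGING
```

The interesting engineering is, of course, hidden inside the IN_FLIGHT state, where the sense-and-avoid sensors keep the drone clear of birds, other drones, clotheslines and trees.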
The essence of this type of delivery system is time management. The drones could deliver objects that are crucial to saving someone’s life, such as medicines and life-saving equipment; however, for the time being, such thoughts are just pipe dreams. The company has been doing real-life tests for long, and various factors such as drone battery, payload weight and range come into play. In case of any contingency, the drones are armed with various responses; they are programmed to avoid certain ubiquitous obstacles such as clotheslines and trees. They have even been programmed with a dog-barking sound to alert others.
But fear not – we believe in Moore's law, and it would probably take just a year or two before we see them flying in our neighborhoods. With Google and UPS also in the race to claim a substantial part of this pie, we should be seeing real-world deployments soon.
Apple never fails to amaze us. But I think it also takes equal pleasure in confusing us!
Famous for its innovative tech, Apple also continues to earn patents for next-gen tech that industry watchers like us keep reading about and salivating over – hoping that the next device from Apple carries the latest tidbit we happen to spot.
Keeping that mischievous tradition alive, the next entry in our “we want” list is a recently-granted patent for a woven display!
The U.S. Patent and Trademark Office has recently awarded Apple that patent (filed back in May 2014), the result of efforts by inventors Douglas J. Weber and Teodor Dabov.
So, what is a woven display?
Apple’s patent describes the use of a proprietary method of weaving light transmissive fibres into conventional textiles to get a visual display.
The interesting part is that these fibers would not conduct electricity and thus would not produce light of their own; instead, they would carry luminance transferred from a source (external LEDs or an external electric base), which would allow them to take on varying optical properties.
These light tubes or light pipes as they are currently referred to, are optical waveguides used for transporting or distributing light for the purpose of illumination. Imagine them to be like threads running from one point to another carrying (not creating) signals.
Modern weaving, braiding, and knitting technology will be used along with three-dimensional knitting tools capable of producing flexible fiber band materials, to create fabric materials that would be difficult or impossible to implement using other fabrication technologies.
A schematic diagram of a weaving system that may be used to weave fibers is shown below:
The idea of a woven display leads to a flexible display.
While plenty of other tech firms are working on bendable, flexible and foldable screens, Apple’s approach is novel and it has immense advantages over other innovators’ approaches.
Images made possible through the fibers could prove to be a boon in the sphere of wearable devices, where currently-unused surfaces on clothes (like sleeves or cuffs) could be converted into visually capable real estate! Or it could let your everyday sports accessories – a wrist band, for example – act as extended add-ons to your devices.
Isn't that an amazing prospect?
Apple could perhaps kick-start the use of this proprietary technology on the Apple Watch's bands – letting them display notifications, or show you your heart rate, and so on.
We glean this hint from the company's own belief that the strap has, so far, not been used to its fullest potential: “While useful for such purposes, these tethers are generally decorative and serve no useful information providing, or other utilitarian, function other than for aesthetic purposes”.
For now, the woven display seems more along the lines of a basic display for notifications. These could be simple and mimic a digital-watch-like readout, allowing for a passive display of missed calls or messages, and of exercise data like steps taken, calories burnt, floors climbed, etc.
This could help save precious battery power while allowing the wearer access to simple data.
This is not entirely new – the Alcatel Hero 2 had a snap-on front cover that allowed for basic notifications like time, SMS and email; it just wasn't flexible.
Now, with this technology, if this capability can be woven into a flexible cloth like material, the adaptations can be numerous, allowing the wearables to become truly communicative.
Google actually has Project Jacquard, a division within the company’s Advanced Technology and Projects that makes it possible to weave touch- and gesture-interactivity into any textile using standard, industrial looms.
Jacquard yarn structures combine thin metallic alloys – rather than light-transmissive fibers – with natural and synthetic yarns like cotton, polyester or silk, making the yarn strong enough to be woven on any industrial loom. These conductive yarns, with their touch and gesture capability, can be woven in anywhere. This March, Google showcased its collaboration with Levi's in the form of a jacket – one that could even be washed.
This has the potential to allow us to transform everyday objects such as clothes and furniture into interactive surfaces. Similar to the tech patented by Apple, Google's conductive yarn needs to be connected to a base that acts as the brains, while the conductive patch is an extension giving the user an easy way to access and interact with the interface.
Such progressive research that will create flexible screens and adventurous new surfaces will allow us to ingrain technology into our daily lives, making our interaction with technology more tactile and will allow us to consume it seamlessly.
Do read our Radar and Tech ShowCase sections for technology that is going to creep into your lives in the near future!
Apple May Be Surprising Us With Its Strategy For Cars
Apple is like one of those reticent movie stars who prefer to ignore gossip about them, rather than to comment upon the conjecture, to prove it one way or another.
And this mysterious demeanour works for them.
After years of conjecture on the issue, we (the outsiders) might have just caught a lucky break.
Some government documentation has let a little kitten out of the bag, providing the clearest indication yet that Apple has plans for self-driving cars.
Even though there was no prior news of Apple having filed for a permit, the website of California's Department of Motor Vehicles (DMV) now reflects that Apple has received a permit for three Lexus RX 450h luxury hybrid sports utility vehicles to ply on public roads and undergo testing. With this permit, Apple joins 29 other companies that currently hold permits to run test vehicles in California.
The permit also authorizes six drivers to take charge of the vehicles, if necessary, during the course of this testing. In an interesting coincidence, Google too, in its early days, had used Lexus SUVs outfitted with cameras and laser sensors.
The laws for testing self-driving vehicles in California are quite strict – to the point that Uber, back in the day, had actually chosen to take its vehicles to Arizona for testing instead of waiting for California to acquiesce. Uber did eventually file for the permit and receive it, and now also runs test vehicles in California.
When we were discussing this internally at Chip-Monks, we realised that this sanction raised two big questions that we needed to find answers to:
One, what does this mean for Apple's autonomous-car plans – something Apple has been infuriatingly secretive about so far? And,
Second, what does it mean for the autonomous-car market?
The answer to the first of those questions is fairly simple: it means that Apple might finally be ready to reveal what it's been doing in this flavour-of-the-decade industry.
In the past, Apple has been hiring automotive experts, particularly the ones who have experience in the field of self-driving cars. There has also been word that Apple has a project called Project Titan for their autonomous cars, but they have never acknowledged the existence of such a project. This grant of the permit could imply that Apple has made progress with this project and might be now ready to lift the blinds. ‘Might‘ being the key word there, though.
As far as the answer to the second of those questions is concerned, that might be a little complicated.
The autonomous car market, at the moment, is working with two primary approaches. Players like Google consider autonomous cars a potential new market, where individuals would want to own their cars and it becomes another saleable product – revolutionary, yes, but saleable as well.
They are joined by brands like Tesla, which has already been making, and selling, self-driving cars in different stages of automation for a while now. Companies like Ford and General Motors, established automotive brands, view this as an extension of their existing business.
On the other hand are the likes of Uber, which have an entirely different approach. Their focus is on eliminating the human driver, so that a car can function as a service, which can be availed, anywhere, anytime. So, basically, it will just be another cab, but it won’t need a driver.
Now, the entry of Apple might usher in a third approach – one of not wanting to build its own autonomous automobile, instead focusing on creating the software to enable these “pods”. Such software can be deployed in partnership with existing carmakers. And that is an approach that actually makes quite a lot of sense, for obvious reasons.
Car makers themselves do not have the necessary expertise for the software side of things; and, software makers in turn, do not have any expertise of designing and building cars.
It's an obvious paradigm; you can only be expected to know what you trade in.
Now, instead of expecting either side to walk the other's talk, a better idea is to get them to work together.
In simpler words: car makers make cars. Software makers make software. Put the expertise of the two together in harmony, and they will deliver better results.
That said, we are not yet completely sure of what Apple is actually planning to do in this line. Whether Apple actually acts on this and puts some cars on the road is yet to be seen. They’ve been the ones to have bigger plans, and quite secretive ones at that, so the best thing to do is to wait for the coin to land before calling it a head or a tail.
Let’s just wait for the Wheel of Time to turn and see how Apple plays this.
David Kosslyn and Ian Thompson, the founders of Angle Technologies, lead a stealth project that may well turn the current tech supporting Virtual Reality devices on its silicon head.
Backed by USD 8 million in funding, the duo have been keeping a low profile and an even tighter lid on their progress. Unwilling to disclose any details that would indicate the magnitude or the capabilities of the VR technology that they’re building, they did share some insights that they’d distilled from their trial and error efforts to increase the efficiency and the quality of their VR projections.
According to them, the method they uncovered has already been in use in other industries, but when used in the VR industry, it will alter the relationship between computer hardware and software as we know it. They go as far as hypothesising that it might be the future of VR programming!
What these two developers are using is a frequent life-saver in Game programming. And here is how it can be the life-saver for VR programmers too:
The trick is offloading work to the GPU: weighed against the CPU's ability to render all these elements, the GPU can do far more, far faster.
Kosslyn and Thompson recollect how they were initially using the CPU to load a plethora of elements individually – trees, bushes, leaves and so on – when they hit the inevitable: loading each element took as much as one-fifth of a millisecond. Considering that close to a million such elements have to be rendered, both during development and while in use, just loading the world would have been extremely slow.
Making the user wait that long for a program to load would be a development catastrophe. Hence, using the GPU became a necessity.
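The arithmetic behind that bottleneck is stark. Taking the figures above at face value:

```python
per_element_ms = 1 / 5      # ~one-fifth of a millisecond to load one element on the CPU
elements = 1_000_000        # trees, bushes, leaves... in a typical VR world
total_seconds = per_element_ms * elements / 1000   # convert ms to seconds
# total_seconds -> 200.0: over three minutes just to load the scene,
# which is why shifting this work to the GPU became unavoidable
```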
“We’re mostly trying to fill in those few milliseconds when GPUs can do work besides rendering graphics”, Thompson says.
Along with Angle, Nvidia – probably the most respected GPU producer – suggests that GPUs would be the best choice for VR, even insinuating that the idea of using GPUs in VR headsets would be a boon, given that hundreds of these chips can fit into one. And that might just be the answer as more graphics-intensive programs are added to VR.
On the contrary, Intel, Nvidia's competitor, disagrees – primarily because it does not produce these chips itself, though it does own acquired subsidiaries working in the VR space.
But as we see it now, it is highly likely that GPUs will be the answer for all future VR technology, and that VR's emergence in smartphones might be facilitated by the power of these unsung heroes.
When it comes to machine-based deliveries, airborne drones get a lion’s share of attention and mindspace – thanks to Amazon’s and Google’s super-publicised attempts to conquer the space first. Unfortunately, equally adept (and in many ways, more immediately doable) land-based experiments have been left out of the limelight.
A San Francisco-based startup called Marble, which has been conducting food deliveries via robot, might be about to rectify that oversight.
Marble's robots are washing-machine-sized machines that roll about delivering food autonomously. The process is simple – an order is received via an app, a robot rolls over to the correct restaurant, gets loaded up with the food, and then rolls on over to the customer's location – all in minutes.
When the robot arrives, the recipient punches in a PIN they received upon the confirmation of the order, and the robot opens up its loading bay for food to be picked out of.
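The handover step is easy to picture in code. Here is a minimal sketch of the PIN gate; the function name and the retry limit are our own illustration, not Marble's actual implementation:

```python
def try_unlock(order_pin: str, attempts: list[str], max_tries: int = 3) -> bool:
    """Open the loading bay only if one of the recipient's first few PIN
    entries matches the PIN issued at order confirmation."""
    return order_pin in attempts[:max_tries]
```

A real robot would also rate-limit entries and alert the remote minder on repeated failures; this shows only the matching logic.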
People will undoubtedly be patting the robot on the head, the way they'd do to a cute kid doing the delivery run!
The technology, of course, is still a work in progress. Marble has already built a fleet that they're using to run local food deliveries in San Francisco's Mission District, as their pilot program.
And this pilot program, like any other, needs to be monitored very closely. Hence a human minder walks alongside the robot. Even though the robots have been designed to function autonomously, Marble is observing performance closely and fine-tuning the tech and software to hit the road running (so to speak) when the time comes to expand operations.
In addition to the walk-along minder, each robot is also constantly watched by remote personnel, via a video camera. So, if anything goes wrong, the human minders are there to ensure the safety of the robot, as well as others around it.
Even though this approach, at this point, wipes out any potential savings from eliminating human personnel, it is not much different from how autonomous cars in testing have to be accompanied by a driver and an engineer even though they are fully capable of functioning entirely on their own.
On first glance, the robot is a little bulkier than one expects, and it’s not even the most visually appealing tech one has seen. But these are early days, and as the product evolves, it’s going to become much sexier!
When the technology progresses enough that a human minder is not needed to shadow the robot anymore, the robot would then be able to make faster deliveries, and would also be far more cost-efficient.
The question thus is of ‘when’ and not of ‘if’; robots doing deliveries are definitely coming to our lives, sooner than later.
And if you thought Marble was alone in staking out this technology – far from it. There are more teams working on technology of this kind, across different countries.
Most noticeably, there’s Starship Technologies, Marble’s leading competitor, also based out of San Francisco, who have had similar programs running since January 2017.
Looks aside, one would have to be blind not to see that the proposition has tremendous potential – it's not just food that can be delivered by a platform of this kind; a fully empowered fleet could account for a lot of other ground-based cargo as well – couriers, mail, supplies and perhaps even pets!
Technology of this kind can then be instrumental in revolutionising ground based logistics.
It was Amazon that last did something of the kind, when they brought out their two-day and one-day delivery schemes, something that has become a sought-after standard in e-commerce the world over now. But one must be circumspect; Amazon had to work relentlessly to devise a system to ensure a delivery time that short could actually be maintained.
All these possibilities also feed into the debate between flying robots – a.k.a. drones – and ground-based robots. While flying robots will have the advantage of always being faster, they will also raise more serious concerns – safety, noise, and environmental impact – than ground-based ones. They might also end up being more expensive to operate, and thus not the best choice on a larger commercial scale.
On the other hand, people might ultimately find it more annoying to share their sidewalks with herds of ground-based robots than to have swarms of drones flying overhead, so you never know!
All said and done, I see most of this autonomy as inevitable, and we as the human race need to acknowledge that that day will soon dawn when we’d be spoilt rotten by it, and have nowhere to go because all we need will magically be arriving at our doorstep or falling gracefully out of the sky! Except, perhaps, Jeremy Clarkson – because hey, Jeremy is anything but graceful!
Touch has been one of the reasons that we switched from QWERTY keyboard-enabled phones to touchscreen phones – because touch is the most native way in which we can interact with electronics and screens.
Yet, touch has its own issues – wet hands, different pressures and, most importantly, the need for a screen for us to touch to provide instructions. Sure, voice and gesture-based controls exist, but they aren't ubiquitous yet (least of all with accents that the Western-world-designed voice recognition tools just don't seem to understand).
So, for the wheel of technology to roll into the next evolution and for other objects to become “smart”, there is a need to engage better with touch-based interfaces.
Researchers from the North Carolina State University have developed a new material – elastic touch sensitive fibres that can improve the device experience and spread it across different interfaces.
A paper published by Michael Dickey, a Professor of Chemical and Biomolecular Engineering at NC State University, states that these soft, stretchable, microscopic fibres can detect touch as well as interpret deformations such as strain and twisting.
These fibres are composed of cylindrical polymer strands filled with a liquid metal alloy. The alloy itself is a eutectic of gallium and indium (EGaIn).
The fibres are just a few hundred microns in diameter – comparable to a human hair.
This size helps integrate them into unconventionally placed electronics that need to stay malleable and withstand the stress of daily wear and tear – such as wearable devices.
How are these fibres used?
Well, they are usually twisted together to form a spiral. Each fibre consists of just three strands, each with a differing fill of EGaIn – one completely filled, another filled only about two-thirds, and the last only one-third filled.
As with regular screens, capacitance enables this fibre's capabilities.
You might remember this from your high-school science textbooks – an electric charge is stored between two conductors, which are in turn separated by an insulator.
Regular glass-based screens work the same way – when you touch your touchscreen, the capacitance between your finger and the material beneath the screen changes, and the screen interprets your intentions from your actions based on this change in capacitance.
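The physics can be sketched with the textbook parallel-plate model, C = εA/d. This is a simplification of our own, with made-up electrode dimensions, purely to show why a fingertip's presence is detectable as a capacitance change:

```python
EPS0 = 8.854e-12  # vacuum permittivity, in farads per metre

def capacitance(area_m2: float, gap_m: float, eps_r: float = 1.0) -> float:
    """Parallel-plate capacitance: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

baseline = capacitance(1e-6, 1e-4)            # tiny electrode, 0.1 mm gap, in air
touched  = capacitance(1e-6, 1e-4, eps_r=8)   # a fingertip raises the effective permittivity
touch_detected = (touched - baseline) / baseline > 0.5   # crude relative threshold
```

Real controllers measure tiny charge transfers rather than computing C directly, but the principle – watch for a relative jump in capacitance – is the same.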
Imagine this technology being woven into the fabric of your shirt… You could give commands to your smartphone by simply touching the collar of your shirt, or touch your cuff to call someone, while tapping the pocket might issue a different action. Sounds awesome, no?
We don't know the University's plans for this technology yet, and we estimate that it will take some more time for this budding technology to be incorporated into washable fabrics.
The research doesn't end just here. The researchers have also developed a sensor using two polymer strands – each completely filled with EGaIn.
Just like the others, these threads are twisted into a spiral; as you increase the number of twists, the tubes get closer, and the capacitance between the two threads changes, registering the twist.
According to Dickey, they can tell the number of times the fiber has been twisted, just by the change in capacitance – “That’s valuable for use in torsion sensors, which measure how many times, and how quickly, something revolves. The advantage of our sensor is that it is built from elastic materials and can, therefore, be twisted 100 times more — two orders of magnitude — than existing torsion sensors”.
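Reading twist count from capacitance boils down to inverting a calibration curve. Assuming, purely for illustration, that capacitance grows linearly with twists (the numbers below are made up, not taken from the paper):

```python
def twists_from_capacitance(c_measured_pf: float, c_rest_pf: float,
                            pf_per_twist: float) -> float:
    """Invert a hypothetical linear calibration C = C_rest + k * n
    to recover the twist count n from a capacitance reading (in pF)."""
    return (c_measured_pf - c_rest_pf) / pf_per_twist

# Illustrative calibration: 2 pF at rest, +0.05 pF per twist
n = twists_from_capacitance(4.5, 2.0, 0.05)   # -> 50.0 twists
```

Tracking the reading over time would additionally give the rotation speed Dickey mentions.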
Well, there are bound to be lots of applications for this revolutionary technology – think of communication, control, emergency services, and something as simple as opening your door – the opportunities are limitless.
Time is of the essence though, as newer and more complicated technologies may wrest some of the opportunities for this new fibre.
New Chefs In The Apple Health Kitchen: Diabetes Specialists
Apple has recently hired a bunch of biomedical engineers as part of what seems to be a secret mission to fight diabetes. As initially envisioned by the late Apple co-founder Steve Jobs, this would be an R&D program to develop sensors that fight diabetes by monitoring glucose levels.
While the company has for now declined to make a statement in this regard, many people supposedly familiar with the matter have come forward to share their “knowledge”.
The team is said to work at a nondescript office in Palo Alto, California, in close proximity to the Silicon Valley headquarters. While we do not know the details of the project yet, we do believe this is an adventure to create ‘breakthrough’ wearable devices that detect the disease and monitor blood-sugar levels.
The reason this could prove instrumental in the field of medicine is that, up until now, it has been impossible to monitor sugar levels without breaking through the skin. Electronic diabetes-monitoring devices have proven to be lifesavers for the hundreds of millions of people affected by the ailment, but all of them require pricking the skin to draw blood and discern the sugar level.
“There is a cemetery full of efforts to measure glucose in a non-invasive way“, said DexCom chief executive Terrance Gregg, whose firm is known for minimally invasive blood-sugar techniques. “To succeed would require several hundred million dollars or even a billion dollars“.
What Apple has is much more than that, so it may well be investing some of it to solve this biggie.
Reports state that about 30 people are working on this project now, and that it has been in the works for about five years. Reports also state that the team has been carrying out clinical trials in San Francisco, the results of which have not been revealed yet.
In addition, they have also reportedly hired consultants to look into the rules and regulations around bringing such a product to market.
For those of you who might be a little surprised, Apple, yes, the makers of the iPhone and the iPad, also have a secret workshop that they have had running for a while now. In this R&D workshop, they have been known to work on many non-phone related products, most of which are experimental for now.
This speaks to the larger Silicon Valley trend that Google, Microsoft, Facebook and the likes have also been feeding into, through their R&D divisions. From Artificial Intelligence, to automated cars, to technology that works with medicine – they’ve got a lot going on in their backyards.
The news of the project comes at a time when the line between pharmaceuticals and technology seems to be blurring, and quite fast. While on the one hand you have scientists detecting rare genetic disorders with facial recognition technology, on the other you have Elon Musk's Neuralink, which plans to work in the risky, uncharted territory of the brain.
The approach most companies are taking is of combining biology, software, and hardware, to tackle chronic diseases using high-tech devices. This has led to the jump-start of a novel field of medicine called bioelectronics, and it’s gratifying to see that Apple is not the only player in the game on this one.
It was last year that another biggie came into the scene when GlaxoSmithKline Plc and Google’s parent Alphabet Inc. joined hands and unveiled a company aimed at making bioelectronic devices to fight illness by attaching to individual nerves. U.S. biotech firms Setpoint Medical and EnteroMedics have already shown that strides can be made with bioelectronics in treating rheumatoid arthritis and suppressing appetite in the obese. Medtronic Plc., Proteus Digital Technology, Sanofi SA, and Biogen Inc., are others that are playing in the field, trying to make a mark in this extremely interesting field.
Specifically in the field of diabetes, Virta is a fairly new startup working to tackle type 2 diabetes – to completely cure patients by remotely monitoring behaviour. Livongo Health is another startup, which has recently raised about USD 52 million to launch its blood-sugar monitoring product. Alphabet too is involved, via its subsidiary Verily, which has tried to tackle this big one with a smart contact lens that measures blood glucose levels through the eye – though that has not proven quite successful yet.
While we don't know exactly what shape Apple's project will take, it does seem to fit into the bigger vision that Steve Jobs famously dreamed for the company. Jobs believed Apple would one day be at the intersection of technology and biology, and making this happen would be a perfect manifestation of that belief.
They are already halfway there with the Apple Watch, which counts calories and steps, measures heart rate, and takes other biological readings. Add this, and voila!
Siri, All Set To Tap Into iMessage And iCloud For The New 2017 iPhone Model
If you’ve been keeping up with the rumours and talks about Apple’s upcoming 2017 iPhone, you’d have read our articles about the new iPhone model’s larger OLED screen or the introduction of Augmented Reality as a prime feature on their next salvo.
But behind all the fuss around both hardware innovations, is a forgotten hero.
The software that's going to power it all. An upgraded iOS has been released alongside every major iPhone revamp to date, and no one understands the criticality of an improved, energised software platform better than Apple.
So, expect iOS 11, people. Not only is iOS the primary bond that has retained Apple's consumers and kept them from shifting to a competing operating system, it has also been the very bedrock of Apple's own growth and prosperity.
You may not have caught it so far, but Apple has recently been awarded patents that focus primarily on a revitalised virtual-assistant feature – clearly hinting at a significant revamp of iOS and of how its next avatar will function.
Well, the patent, for a “Virtual Assistant In A Communication Session”, lays out the basic fundamentals of the new journey. Siri will most likely be integrated into iMessage and iCloud – a monumental change, much like the rumoured AR introduction.
The virtual assistant would be able to respond to queries made inside an iMessage chat. But does that mean that Apple will be listening in on your personal conversations?
iMessage already is end-to-end encrypted and it is highly unlikely that Apple would compromise on user privacy for the sake of bringing Siri to iMessage.
To protect the privacy of its consumers, Apple has made it quite clear in the patent that members of an iMessage chat would be notified when at least one of them is using the Siri assistant within the chat session, and that the users themselves would be the authorizing party deciding what personal data Siri can access.
On top of that, Apple is also planning on allowing Siri to make payments on behalf of the user, by choosing the suitable payment app when the user asks Siri to do so during the iMessage session. Users can currently make PayPal payments using Siri, but not while accessing iMessage. The transaction would have to be authorized using the Touch ID. This peer-to-peer payment system riding on an already end-to-end encrypted messaging session would be an impressive addition to the features already being rumoured for the next iPhone(s).
The extent of Siri's reach might not be limited to iMessage; it might even gain enough power over iCloud to access data from any other Apple device the user owns. Using the Apple ID, information from the user's devices would be drawn upon, and the necessary actions and responses offered across the operating system – on the Mac, iPhone and even an iPad.
But here is the value judgement that you or any other Apple user/enthusiast should make.
Google has already beaten Apple to the stadium as its Google Allo app already provides similar services. The only difference is the fact that Allo isn’t encrypted simply because features like Google Assistant tap into a user’s data to provide its services and Allo needs to communicate with Google’s servers to cater to all the requirements of the consumers.
The decision will always remain subjective, dependent on the dilemma of choosing between privacy and being the first mover.
Irrespective of that choice, the upcoming iPhone seems to be destined to become an immensely powerful ace – backed by significant changes in hardware, software and the very ecosystem supporting it.
The only thing that might hurt its trajectory is if we've been hoping too hard – reading too much into the rumours and conjectures, and dreaming up a device that Apple isn't going to launch come September!
There’s nothing worse than wishes that crash against the rock of reality, is there? And yet, Apple won’t be to blame, because they never said they were going to wow us. We just fervently, hopelessly and oh-so-desperately want them to!
Phone Brands Shifting Focus To Brick And Mortar Stores In India - Here's Why
The differences in the prices of smartphones between online and offline stores are expected to diminish soon, with the implementation of the Goods and Service Tax (GST) – which is due to roll out on July 1.
In preparation for this transition, smartphone companies such as Asus, InFocus, Xiaomi, Motorola, ZTE and Huawei have had to come up with new, more efficient strategies to retain the demand for their smartphones in the offline market.
Currently, when you buy a device online, you find it at least a couple of thousand bucks cheaper than you would in a neighborhood store. For brands like Xiaomi and Motorola, which have largely stuck to online stores so far, this plays in their favour; they already have comparatively lower prices, and they can sell their devices at a lower tax rate online.
Presently, online sellers based in cities like Bengaluru and Hyderabad sell smartphones at a lower VAT (Value Added Tax) of 5%, compared with sellers based in locations where the VAT on smartphones is much higher (usually in the 10-15% range).
The national average is about 12%.
It is this imbalance in the VAT levied that the GST will soon make uniform across the nation, since it is a national tax, not a state-driven one.
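The tax arithmetic explains the online edge. A quick sketch with an illustrative handset price – the base price is our own example; the VAT rates are the figures quoted above:

```python
def retail_price(base_price: float, tax_rate: float) -> float:
    """Price after adding tax at the given rate (0.05 means 5%)."""
    return base_price * (1 + tax_rate)

base = 10_000.0                        # illustrative handset price, in rupees
online  = retail_price(base, 0.05)     # shipped from a 5% VAT state
offline = retail_price(base, 0.12)     # ~12% national-average VAT
gap = offline - online                 # ~700 rupees: the edge a uniform GST removes
```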
So, even though these brands have, off and on, been working on offline sales strategies to sell to the larger group of Indians who are not online, their focus has been the urban, educated buyer who already is. A change in this focus now seems around the corner, but the reason might not necessarily be a desire for further expansion into the market; the reason this time is the need to get a better grip on the offline market before the playing field is levelled.
These brands have chalked out some novel plans of action to enhance the sale of their devices in India’s challenging market. Direct distribution, a partnership with large-format retail, building separate models for the offline market, putting together their own stores, expanding marketing expenditure – are some of the ways in which the smartphone makers are planning their extension.
“There is a scramble amongst online smartphone brands to expand into offline retail. While a couple of brands like Xiaomi and Huawei are intensifying efforts, most others are making fresh attempts. With GST, the value added tax (VAT) advantage, which the online sellers enjoy, will disappear completely, making online and offline a much more level playing field”, announced cellphone retail chain Hotspot’s director, Subhasish Mohanty.
With the new approach that the brands are gearing up to adopt, they would directly sell the smartphone to the retail stores – not just any retail stores though – only stores that they have collaborated with.
To that end, Xiaomi has recently partnered with four major South Indian retail chains, namely Sangeetha, Poorvika, BigC and LOT. The Chinese budget brand also plans to set up self-owned Mi Home stores in India, just like the ones it has in China.
Asus, a Taiwanese brand that has mostly had an online presence in the country so far, is now also planning to expand into the offline market.
InFocus, a Foxconn-owned brand, is re-launching its offline business and building a portfolio of models; it plans to invest big money in offline trade and marketing, replicating the strategy of its Chinese rivals Oppo and Vivo.
ZTE is also going into offline expansion, including expansion into smaller towns, and so is Huawei.
These changes are going to be interesting not just for the smartphones they bring, but also for the Indian e-commerce market, given that the business of smartphones is quite a chunk of it. It is because of that, that companies such as Amazon and Flipkart are drawing up plans to foray into the offline distribution of smartphones for brands like Coolpad, OnePlus and Lenovo.
This, altogether, could be an interesting change in the smartphone world. Bigger brands such as Samsung, LG, HTC etc., already sell through their offline stores heavily in India. Even Apple has third party reseller stores in the country and is soon opening up its own stores.
Thus, these “economical” brands might find it difficult to sink their teeth into a market that is already quite populated, and to an extent, they may be outclassed by the larger ones.
On the other hand, they might also be welcomed with open arms, given how well they’ve done through their online channels so far.
A year back, when the Galaxy Note7 was released, it was touted as a revolution in the smartphone market. However, with batteries that would heat up very quickly, and some phones burning or even exploding, it turned out to be a tough year for the Korean electronics giant.
With the scheduled release of the new Samsung Galaxy S8 and Galaxy S8+, backed by assurances of a healthy battery in each unit put up for sale, Samsung has managed to regain its ground, at least when it comes to creating a hype in the market.
But perhaps what is really getting the market riled up is an alleged leaked image of the new Samsung Galaxy Note 8 which is poised to release around September this year. The image gives us an idea of how it might look – and going by the alleged “leaks” the Note 8 does not seem to have too many visible differences from the Galaxy S8 or S8+.
The sole reason for coining it as a possible Note 8 is the fact that an S Pen can be seen lying beside the phone in the leaked image.
What the leaked image has managed to do is, set the fuse for speculation and guesswork (oops! pun not intended!!). Given the fact that the Note 8 has to both be physically and virtually different from the S8 duo, here are a few features that might define the Note 8’s exclusivity:
Overall, the Galaxy Note 8 would need to be a solid package if it has to tear people away from the Galaxy S8+ and the iPhone 8, not to mention the Xiaomi Mi 6 and the like.
On paper, based on the leaks and our conjecture above, the Note 8 does look like a reliable, sporty and sleek phone that would certainly be worth buying. The only foreseeable issue is that the iPhone 8 might overshadow the Note 8, given the proximity of their release dates.
The features of the iPhone 8 ‘seem’ far ‘better’; however, do keep in mind that neither phone has any official credentials from its manufacturer yet. Also, since the Note 8 has generally been slightly cheaper than the iPhone, it might eke out a little headroom there.
We’re all going to have to wait on this one, to see how much Samsung is able to bring to the party, before we can really establish if the Note 8 has enough going for it to swing the deal.
Just a few days ago, we’d written about how India was very, very far behind in developing and promoting the use of Electric Vehicles. We’d spoken about costs, poor supporting infrastructure and insufficient governmental focus on this sector as some of the debilitating elements.
Well, one of them – Governmental focus – has a new, good story to tell!
Here’s another moment of pride for Indians, thanks to the stupendous people at ISRO.
The Vikram Sarabhai Space Centre (VSSC) under Indian Space Research Organization (ISRO) has successfully developed path-breaking lithium-ion batteries that are high-density units, which despite higher charge storage capacity, are actually smaller, lighter and more compact than regular batteries.
ISRO had developed their innovative lithium ion battery technology and used it for their space applications – to power satellites and other space missions. Seeing that done successfully, ISRO and ARAI (Automotive Research Association of India) began working jointly a while ago, to adapt and develop this indigenous lithium ion technology for automotive use.
One of the first things they realised was that batteries for automotive use would need lower specifications and different energy densities (compared to the batteries used in space), as well as being made suitable for higher ‘duty cycles’ – the number of recharge cycles, and the very life of the battery, would need to be enhanced, since automobiles see more rigorous use across their lifetime.
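To see why duty cycles matter, here is a rough back-of-the-envelope sketch; every number below is an assumption for illustration, not an ISRO or ARAI figure:

```python
# Back-of-the-envelope: how rated charge cycles translate into calendar
# life for an automotive battery. All figures are assumed for illustration.
cycles_rated = 1000      # assumed full charge/discharge cycles the pack survives
cycles_per_week = 3      # assumed usage pattern for a commuter vehicle
weeks_per_year = 52

years_of_service = cycles_rated / (cycles_per_week * weeks_per_year)
print(round(years_of_service, 1))  # roughly 6.4 years at this usage

# A satellite battery, by contrast, is cycled under tightly controlled
# conditions; a road vehicle sees heavier, more frequent cycling, which
# is why the cycle life of the chemistry had to be enhanced.
```

Double the weekly usage, and the same pack lasts roughly half as many years, which is exactly the trade-off the ISRO-ARAI adaptation had to engineer around.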
ISRO’s capability to craft the right chemistry and to translate the technology helped them create compact lithium ion battery systems that could meet the grade set by the ARAI.
Interestingly, during the ‘develop’ phase itself, ISRO was approached by over a dozen automobile manufacturers to partner and launch electric vehicles based on their battery technology. But the government realised the potential for a larger mission and requested ISRO to make the technology available to multiple players instead of looking at a technology partnership.
Thus, ISRO will share this technology with domestic automobile manufacturers and enable them to mass-produce it. The information will be accessible even to private players, in a true innovation-for-mankind move. The sharing of manufacturing technology will aid mass production of batteries, increasing competition and helping bring down prices further. That is the government’s first priority.
So far, manufacturers have had to import lithium-ion batteries, making the final product expensive and accessible only to a few. With the technology available locally, manufacturers will be able to roll out cheaper and more efficient batteries. This will boost production to a scale that may soon enable cheaper and more reliable electric vehicles. Estimates suggest that bulk production could lower prices by up to 80%, making batteries feasible for the budget-conscious Indian.
The indigenously manufactured and customised batteries have successfully cleared multiple rounds of tests. And not just lab-tests – earlier in 2017, an electric two-wheeler prototype was rolled out, powered by the indigenous lithium-ion battery.
As per reports, Mahindra Renault, Hyundai, Nissan, Tata Motors, High Energy Batteries, BHEL and Indian Oil are interested in indigenous production and are expected to incorporate the technology into upcoming products and vehicles in the coming years.
Clearly, the government is looking to boost the sale of electric vehicles to solve the problem of air pollution that Indian cities are currently besieged with. Delhi has long been listed among the top 10 most polluted cities worldwide. And while the Indian government has tried various measures to bring down pollution, there’s not been any significant effect.
With the situation not improving, there’s been a rise in sales of air purifiers, with even bottled pure air entering the market!
The government is now relying on the value-for-money appeal of budget-friendly electric vehicles to help tackle the problem of pollution in the country. Here’s a toast to them!
While ISRO had developed and delivered the prototypes to the ARAI for testing at their Pune facility, and the ARAI was expected to clear the batteries by end of 2016, the clearance is running a little behind. Get with it, guys!
We consumers have become rather difficult to impress when it comes to smartphones now. Apple came through by removing the headphone jack from their iPhone 7. And Samsung too is making a lot of efforts to do something new, particularly after the Note 7 debacle. Rumors abound that Samsung might be launching a foldable smartphone soon. How soon, we don’t know, but what we do know now is that they’re about to begin testing a prototype for a phone that they’ve named Galaxy X for the time being.
Chip-Monks had begun talking about this super-secret project at Samsung way back in November 2015, when we’d heard they were testing two smartphones bearing different processors. Time passed and the phones didn’t make an appearance.
Then in December 2016, we wrote about the device again, since the wind had it that Samsung would be unveiling their miracle at the CES or MWC shows in 2017.
The world then heard about this when some other websites reported that Samsung had applied for the patents for their technology. We wrote about that in February 2017, and you can read that article here.
Well, it’s time for another update.
In keeping with our first report on the matter, there’s validation that the prototype will be a foldable smartphone, with a horizontal joint in the middle of the phone. This joint will make it possible for the phone to be folded up to 180 degrees after use. The hinge will hold together two 5 inch OLED displays.
It’s worth noting, though, that this isn’t a completely original design. Devices already in the market, like the Kyocera Echo and NEC’s Medias W N-05E, have had similar foldable designs.
But like Apple, Samsung seems to be veering towards “being the best, is better than being first”. This has been a long term project for Samsung, and they haven’t spared any expenses in growing the tech before they launch it to the world.
They have another advantage too – they own Samsung Display – their very own display manufacturing division that can back them on all the experimentation, testing and redesign, for as long as Samsung needs. Plus, there are none of the perils of outsourcing research and development to third parties.
The Investor, a South Korean publication, reported that Samsung has placed orders for the production of only a limited number of prototypes, 3,000 at the most. We can expect this to be completed by the first half of this year.
“Samsung seems to be testing the waters with the dual-screen device to gather ideas about its upcoming foldable phone”.
And this is the right move too. There are bound to be some potholes and cracks that develop during testing. After the Note7 dud, Samsung will be keen to ensure this new tech is sweated properly, and that all defects and opportunities so revealed are corrected effectively.
This will also allow the company to test out the potential of foldable devices. If all works out, we might see Samsung’s “newest” invention in the market in the coming two years or so.
With LG’s rollable panels in the works and Samsung’s curved displays already in the market, the South Korean tech company doesn’t have a lot of time to achieve its ambitions. While we might be anticipating foldable phones with enthusiasm, their usefulness and lifespan are still in question. Samsung will still need to add some promising new features to these smartphones to gain attention in the market, beyond the foldable “novelty”.
The company has seen quite a lot of ups and downs, and is still well above water. But with growing disenchantment with “gimmicky” features on devices, Samsung definitely needs to get this right, by ensuring that the new capability launches with real uses, instead of waiting for version 2 or 3 to deliver them (like the Edge displays that hardly did anything on the 2014 Samsung Galaxy Note Edge, and only began to justify their existence with the Samsung Galaxy S7 edge a full one and a half years later!)
Don’t we all love things unlimited? Advertisers thrive by this maxim. Progress aims for it. And Technology tries to find it.
Take the example of network bandwidth. The spectrum allotted to us by our Wi-Fi router offers a limited amount of space to work with, and we have all been making do, somehow. Only, that isn’t enough any more. Data and content are accumulating faster than we can imagine, let alone handle.
The devices that we use, such as phones or computers rely on electromagnetic waves to transmit and receive information. But Data and Content can only travel on a certain bandwidth, and it gets crowded.
Maximising the data throughput of the available bandwidth has been the intention of all network equipment manufacturers since time immemorial. Continuous evolution is mandatory if efficiencies and utility are to be increased.
But, the problem has almost always been interference. And that band of baddies is becoming bigger every passing month – as more and more devices join our little networked world.
Think of bandwidth as a road. Traffic in a select direction might be orderly (like that on one-way streets), but it is not always feasible. You need incoming as well as outgoing traffic to move in tandem, in order to do anything electronically. But as bi-directional movement happens on the path, traffic jams ensue.
In bandwidth terms, using the same frequency (i.e. the same road) for both incoming and outgoing interactions causes interference, in which both signals get distorted.
Engineers at UCLA have designed a solution for this problem. Their research supports the use of a circulator which can send and receive information simultaneously. The information moves through different ports, but they share the same antenna.
Imagine that, instead of building two one-way roads for traffic, one can use a single road, thanks to the tech UCLA has developed. The space for movement is doubled, and so is the potential data usage.
UCLA has published its research in the prestigious journal Nature. The research team, headed by Ethan Wang (Associate Professor of Electrical Engineering at UCLA’s Henry Samueli School of Engineering and Applied Science), has engineered this device. And what they have developed is quite revolutionary.
Unlike traditional circulators, these are made to handle transmitted and received signals simultaneously. Traditional circulators are built with magnetic materials, making them troublesome to integrate into a modern circuit. They can do the job, but they can’t be used universally; other circulators are limited by their size in the frequencies they can handle.
Wang and his team have engineered a “Distributedly Modulated Capacitor”, which is made of non-magnetic materials such as silicon or compound semiconductors. This lets it act as both a traffic modulator and a director of traffic: it shifts incoming information onto a different frequency for processing, but doesn’t change the outgoing one.
This simultaneous configuring of signals will be done by a special time-switching strategy called Sequentially-Switched Delay Line (SSDL). This is what helps in building an unlimited bandwidth solution.
According to the journal, “The essential SSDL configuration consists of six transmission lines of equal length, along with five switches. Each switch is turned on and off sequentially to distribute and route the propagating electromagnetic wave, allowing for simultaneous transmission and receiving of signals through the device. Preliminary experimental results with commercial off the shelf parts are presented which demonstrated non-reciprocal behavior with greater than 40 dB isolation from 200 KHz to 200 MHz. The theory and experimental results demonstrated that the SSDL concept may lead to future on-chip circulators over multi-octaves of frequency”.
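At the port level, the routing behaviour being described can be pictured with a toy model of a three-port circulator. This sketch captures only the one-way ring routing that lets one antenna transmit and receive at once; it says nothing about the SSDL switching mechanics themselves:

```python
# Toy model of a three-port circulator: a signal entering one port exits
# only at the next port around the ring (1 -> 2 -> 3 -> 1). In a radio
# front end, port 1 is the transmitter, port 2 the shared antenna, and
# port 3 the receiver, so transmit and receive paths never collide even
# though both use the same antenna.
NEXT_PORT = {1: 2, 2: 3, 3: 1}

def route(entry_port):
    """Return the port a signal exits from, given where it entered."""
    return NEXT_PORT[entry_port]

assert route(1) == 2  # transmitter output goes to the antenna
assert route(2) == 3  # the antenna's incoming signal goes to the receiver
assert route(3) == 1  # nothing from the receiver loops back to the antenna
```

The "greater than 40 dB isolation" quoted from the paper is a measure of how little of the transmitted signal leaks from port 1 directly into port 3 in the real device.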
Using the metaphor of trains, Wang explained how one can switch data on- and off-track to maximise efficiency. The idea has appealed to DARPA, which has promptly granted USD 2.2 million to test and refine the development.
One can surely understand the potential of this, given that we are almost on the cusp of an information explosion. Unlimited bandwidth could be the gentle but firm shove into a newer dawn.
Like BlackBerry, Nokia has great survival instincts. Considering how badly Microsoft’s experiment with Nokia bombed, a lot of us were curious to know what would happen to the once-foremost mobile phone brand.
Well, it persisted, and was ultimately acquired by HMD Global.
Considering the amount of money HMD must’ve put on the table, it’s no surprise that it’d be in a hurry to bring out some new models and begin the long resurrection journey asap.
HMD recently launched the Nokia 3, Nokia 5, and Nokia 6 at MWC 2017. The iconic Nokia 3310 also made a comeback, in a modernized avatar.
Now, rumors are circulating about a potential flagship phone, Nokia 9.
Based on a report from NokiaPowerUser, it looks like the Nokia 9 is going to be HMD’s big bang for this year. A premium smartphone said to be running pure, stock Android, it will prove a challenge to many a brand in the higher-mid-range bracket.
First up, surprisingly, Nokia-HMD seems to have been able to lay their hands on Qualcomm’s latest Snapdragon 835 processor (the same one powering Samsung Galaxy S8 & S8+ and Xiaomi’s much vaunted Mi 6; in fact the upcoming Sony Xperia XZ Premium is also going to be riding on the same top-tier processor).
Given the quantities in which these three flagships are going to be demanded, we know it couldn’t have been easy for Nokia-HMD to get their hands on this bad boy, so we shouldn’t really blame them for making us wait a bit (the supply chain for the Snapdragon 835 will only roll around to Nokia-HMD’s needs after the first three top-tier devices’ orders have been met). The Nokia 9 will be launched sometime in the third quarter of the year, which means we’re not seeing it before August, at the very least.
Moving on, the camera is most often the first thing people look for in a smartphone, hence HMD’s played it smart.
The Nokia 9 will most likely have a 22 megapixel rear camera, complete with Carl Zeiss optics. And that is always indicative of a top-drawer device. Why? Well, Carl Zeiss made the lenses that were used for capturing all that mind-boggling imagery in The Lord of the Rings.
Thus, software aside, there’s not much need for me to say anything more about its imaging potential. Since the software side of things will be handled by Android, you’re good to go on that front too.
Did I mention you’ll get all your selfies at 12 megapixels?!
This brings me to the thing that I am personally most excited about – the audio on the Nokia 9.
This is going to be the first smartphone that will have the Nokia Ozo technology.
I’ve seen its YouTube demo, and believe me when I say it – this technology will make your sound experience come alive. Each distinct sound from your surroundings would be audible, in crystal clear quality. 3D audio and an immersive VR experience will seal the deal!
The phone is actually a phablet that carries the now-mandatory 5.5 inch display, which will be a QHD OLED panel. From a data privacy and device safety standpoint, the Nokia 9’s iris scanner and fingerprint sensor should keep both secure.
It is rumoured to have 6 GB of RAM and 64 & 128 GB storage options, which make for high-end performance and ample storage. And the 3,800 mAh battery, coupled with Quick Charge 4 technology, should mean that you never have to worry about your phone running out of juice!
It will be operating on Android 7.1.2 Nougat, and the device’s IP68 certification for dust and water resistance will help you enjoy the phablet in every possible situation, no matter the poor disposition of the climate and other environmental elements.
The price? Well, the Nokia 9 is expected to launch at USD 700, thus about INR 45,000, but we should all hold our thoughts till the device finally launches.
What more could you possibly ask for in a smartphone?
The Nokia 9 is sure to turn heads the moment it is launched. I am waiting with bated breath!
Apparently, Lenovo is trying to surprise customers who like to keep a tight leash on their budgets, by releasing ‘in budget’ handsets. The other agenda is probably to attract first-time smartphone buyers by fitting their needs, and also breaking the myth that cheap phones need to be bottom-rung performers.
Lenovo is reportedly working with its in-house Moto team on a mission to create the brand’s “most affordable handsets ever”, and tread the challenging narrow budget lane.
The company is expected to release two models – the Moto C and the Moto C Plus. The pair will have features similar to the Moto G5 and the Moto G5 Plus, but are expected to cost significantly less than their inspiration.
Some of the specs of the upcoming handsets have already leaked online – both the phones are said to run the latest Android OS (Android 7.0 Nougat), offer a 5 inch display and will be powered by a quad-core MediaTek processor.
While the Moto C is said to have a non-HD resolution of 854×480 pixels, the Moto C Plus will carry an HD screen (1,280×720 pixels). The phones will be available in 4G configuration and carry different battery capacities. The Moto C is said to be powered by a 2,350 mAh battery, while the Moto C Plus will have nearly double that capacity, at 4,000 mAh.
Conjecture has it that the Moto C will be priced at INR 5,000, while the Moto C Plus would cost about INR 7,000.
No official announcement regarding the phones or the launch availability has been made yet by Lenovo.
As we’d said back in September 2016, and even earlier in April 2016, Apple has embarked on a very serious mission – that of cleaning up its App Store, and by extension, improving the quality of the apps in it.
Clearly, user experience – with the Store and with iDevices, is at the core of this mission. It may not be apparent to you, but as Facebook’s app had proved, apps do far more on the device than they let on, and it is to mitigate such negative impact that Apple is taking a significant step to force developers to improve the quality of their wares, and by extension, your experience with your device.
iOS app developers have been intimated that Apple is about to completely pull support for 32-bit apps in a few months. The move is believed to be the bedrock of the upcoming iOS 11: Apple will only allow 64-bit apps on the Store.
This is obviously not a newfangled plan. Apple has been gradually working to this end over the last few years. This year it will take a complete and clean break from all 32-bit iOS apps which will affect approximately 200,000 apps – uprooting them from the Store, unless they are updated.
A Quick Rundown On Apple’s Move to 64-bit iPhones
Launched in September 2013, iPhone 5s was the first iDevice with a 64-bit processor. After the release of the iPhone 6 and 6 Plus, Apple discontinued iPhone 4S, so the iPhone 5C was technically their last 32-bit iPhone.
In February 2015, Apple made it mandatory for all new apps to have 64-bit support. You can see where they were going with this – slowly integrating all their hardware and software with 64-bit support. With the announcement of the iPhone 6s and 6s Plus in September 2015, Apple withdrew all their 32-bit devices.
To get their Developer ranks moving to adopt the 64-bit way of life, since 2016 Apple has even been inserting warning notes on each app’s detail page (on the App Store) – “This app will not work with future versions of iOS. The developer of this app needs to update it to improve its compatibility”.
Many developers read the writing on the wall and began upgrading their wares. But not all.
So Apple’s now upped the ante – with iOS 11 coming out sometime around September, the mandate is that Apple will only support the 64-bit apps.
If you’re wondering how this will benefit Apple, even though they might lose many apps – well, there’s a fairly simple answer.
64-bit CPUs can process data quicker than their 32-bit counterparts, in addition to using RAM more productively. If code is written specifically for a 64-bit CPU, a 32-bit CPU won’t be able to run it. On the other hand, while 64-bit CPUs are compatible with 32-bit software, the performance efficiency of the 64-bit processor is lost to emulation. So by removing all 32-bit software, Apple is essentially improving the performance of its products.
Apple’s complete control of their hardware as well as their software and App Store puts them in a unique position to be able to run such heave-ho’s. And there’s good reason.
Besides filtering out their app store of all the abandoned apps, it will also free up some storage space on your devices, and you’d be able to get better experiences, more suited to the hardware that you’ve invested in. What’s the point in driving your super car in second gear?
The hardship to Developers aside, there are plenty of reasons why this move is good for you and me.
India Needs To Befriend Electric Vehicles, And Ola's Going To Try And Convince You
India’s electric car market just does not exist.
Sure, electric cars have begun domestic use in many countries, but Indian citizens still seem wary of these ‘hybrid’ or all-electric cars for a variety of reasons:
All of the above combine to make an electric vehicle more of an impediment than a vehicle or a mode of transport, for most people.
Thus, in order to establish and demonstrate the viability of electric vehicles, the government is left only with the option of implementing it in public transport – via e-rickshaws, cabs and buses (which would need to be supported by state and local governments along with subsidies from the central government).
But there are problems in that segment too. Most commercial vehicle owners would not make the jump voluntarily –
All said, the electric bug has not bitten India; consequently, India has not been able to impress any domestic or international investors, due to extremely poor demand for the White Elephant product. Thus, there aren’t too many options either – not too many brands make electric vehicles – hence the chicken-and-egg causation continues.
But governments are a persistent creed. They still have a mandate to achieve – to ‘Go Clean’ (energy) by 2030.
Seeing the poor sentiment from consumers, trying another tack – like the adoption of electric vehicles in cab services – could change the perspective and demand in the market. To that end, the government is waving some carrots at cab operators.
Ola, India’s largest indigenous cab service is poised to introduce at least one million electric cars in a 5 year timeframe – the first of which is scheduled to be introduced in Nagpur, very soon.
Given that the company is fighting an intense domestic battle with American cab-hailer Uber, it is undoubtedly using the sops and subsidies provided by the government to prop up its balance sheet.
I sense caution in their countenance.
Ola is backed by the Japanese giant SoftBank Group. While SoftBank’s Masayoshi Son had suggested in December that Ola would introduce around a million battery-powered cars over the next 5 years, Ola’s CEO and Co-Founder, Bhavish Aggarwal, has said at another time that while the prospect would be followed up on, they would be cautious about the pace of the rollout, which would depend solely on the Indian consumer’s requirements.
However, this cautious approach might be sidetracked if Ola can find support and partnership from the Central Government, as it did with e-Rickshaws. While that partnership did not quite result in long-term sales for Ola, it surely provided a platform for the electric vehicle market in India, and was noticed for it, far and wide.
Reading between the lines, I can only summarise with a supposition. Ola’s management doesn’t mind experimenting, but it definitely does not want to commit profitability (and thus, viability) at the behest of a government program.
And it seems I am correct in my premise.
Aggarwal, in a media conference was reported saying “Electric vehicles could transform transportation completely in India by the methods of lowering the cost of operation and ownership”.
If that is not cautious optimism, I don’t know what is.
And I cannot fault the man. It’s good to be cautious at a time that you’re draining money on the one hand, trying to win a war. You must learn from other leaders – like Tesla.
Tesla’s repeated refusal to set up a plant in India summarises the plight of the domestic market.
Despite “experts” suggesting that Tesla is going to set up a plant in India “soon”, the company itself has repeatedly denied any near-term plans for a plant in India.
This, despite visits to its California plant by India’s Prime Minister and our Transport Minister during their own trips to the U.S., and assurances of land being provided near a port for prompt exports.
Despite all this wooing and sops being offered on a platter, the world’s most renowned electric vehicle manufacturer has still not budged. Clearly, Tesla believes that the electric revolution is still some time away for India.
Tesla however has opened up to providing a pan-India supercharger network in India, which is a positive sign if not the sign the market was hoping for.
The Government’s plans
The Government has a very ambitious plan (see, I can be diplomatic…) of making India a “100% electric vehicle nation” by 2030. It wants to see 6 million electric- and hybrid vehicles on the roads by 2020 under the National Electric Mobility Mission Plan (NEMMP) and Faster Adoption and Manufacturing of Hybrid and Electric Vehicles (FAME). Ahem.
The automobile industry in India is currently a $74 billion industry and is likely to be the third largest automobile industry by the end of this fiscal year. So there’s plenty of money on the table.
Starting with a large-scale cab company like Ola makes immense sense, as Ola’s experience and fillip to profitability might also motivate other cab service providers to start using electric cars and finally, domestic consumers.
There’s some time to go before we see the climax to this particular story, but we really hope that India does ‘Go Clean’.
Apple AirPod Cases To Become Portable Chargers In Another Pioneering Move
Apple has proven itself to be a true master of innovation time and again. So many new concepts were first introduced to us by Apple, such as the iPhone, the Mac, iPads, and more – even the icons we see on our PCs seem inspired by Apple’s design principles. The smartphones and tablets we use on a day-to-day basis would never have made their way to us if not for Apple’s iDevices entering the market. And not to forget, removing the headphone jack from their phones was another pioneering move by Apple.
They’re not about to ever let us forget just how creative, resourceful and original they are; but they’re so well tuned into consumer needs, that they cannily include capabilities and then surprise us, leading to an “Oh yeah! Of course!” moment.
So hold your breath, because they’ve apparently come up with another one of those epiphany-inducing ideas. The case that holds the AirPods could soon double as a charger for several other devices.
A smart move too, taking into account the rumours about iPhone 8 having inductive wireless charging.
Patently Apple spotted, in a rather large pile of 250 patents granted to Apple at the end of March, one that conceptualizes the design for this next-gen AirPod case.
Their conception is of a wireless power transmitting component embedded into the AirPod case. When placed flat somewhere, the case will then become a charging port for an external device, in addition to the AirPods that are charged internally. And voila! You would be able to charge two of your Apple devices at once, with just an unassuming little case.
The same patent also gives a broad list of the kind of devices that would be compatible with the case:
“Such devices can include, for example, portable music players (e.g., MP3 devices and Apple’s iPod devices), portable video players (e.g., portable DVD players), cellular telephones (e.g., smart telephones such as Apple’s iPhone devices), video cameras, digital still cameras, projection systems (e.g., holographic projection systems), gaming systems, PDAs, as well as tablet (e.g., Apple’s iPad devices), laptop (e.g. MacBooks) or other mobile computers. Some of these devices can be configured to provide audio, video or other data or sensory output.”
Not only this, the upcoming cases will also be equipped with a number of sensors to help recognize the presence of other devices: a mass sensor, a mechanical interlock, a Hall-effect sensor and an optical sensor.
The only real question left to ask is how much power these cases will be packed with.
But hold on, I’m not done surprising you yet!
There is also a strong likelihood of the future cases being waterproof. This is the cherry on top. I mean, not only will the case be a portable charger for most of your Apple devices, but you will also be able to take it with you into wet places. It makes sense to believe that the AirPods themselves might be waterproofed very soon. And although we might have to wait a while to get our first glimpse of them, never-ending fun is what these cases promise us.
Bendable devices have been a long time coming – with brands like Samsung, LG, Microsoft and even a little-known brand, Moxi, rumoured to be readying various forms of smartphones and wearables.
Each of these brands obviously estimates that the Next Big Thing is going to be a device that can be folded or bent, to offer more utility and durability.
Now, some recent news on the matter is fanning the flames of the rumours some more. The rumour mill has it that Samsung has now developed technology to create a graphene-based storage chip.
This is an important milestone – because for a device to bend, all of its internals must support such adventure. Hence, each of these “internals” must be developed with that new personality in mind. And that will call for some innovative approaches and materials.
Most of us tend to think of memory as an abstract thing, not realizing that for the software on a phone to run, there needs to be a physical hardware component providing the memory it runs on.
Current devices use what is called flash memory, which is not made of flexible material and thus would not be well accommodated in a bendable device. Graphene, by contrast, is a flexible material that can bend as the phone bends, which makes this development key to the impending bendable smartphones.
One of the most promising materials that will assist flexibility is graphene. We’d written an absolutely brilliant article explaining what graphene is, and I highly recommend you read it to fully grasp the concept.
Graphene is a strong conductor of electricity, and given its bendable and flexible attributes, it is most likely to feature in the coming revolution of smartphones.
A graphene-based bendable memory chip not only provides the necessary flexibility, it also frees up some critical space for the manufacturer.
Given that its length is a mere 50 nanometers and its thickness is 8 nanometers, the chip will provide Samsung with a little bit more space to work with, and to shoehorn more battery or additional hardware.
But that’s not all! This hybrid oxide-titanium oxide memory chip requires a mere 5 nanoseconds to boot, and to write and read data. Since a smartphone’s every process depends on fast, frugal memory access, a graphene-based memory is ideally suited to it.
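To put that 5-nanosecond figure in perspective, here’s a quick back-of-the-envelope calculation (just arithmetic on the reported number, not a benchmark of any real chip):

```python
# A 5 ns access time means one read or write completes in 5e-9 seconds,
# so the theoretical ceiling is simply the reciprocal of that latency.
access_time_s = 5e-9

ops_per_second = 1 / access_time_s
print(f"{ops_per_second:,.0f} operations per second")  # 200,000,000
```

Two hundred million accesses a second, from a chip thinner than a virus – that is the scale of the claim.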
Given that Samsung has already made a strong investment in Graphene and has even been granted a patent for it, the previously-agreed partnership between Samsung’s Advanced Institute of Technology (SAIT) and Sungkyunkwan University is likely to bear fruit soon.
We can thus safely assume that Samsung plans to utilise this technology somewhere, sometime soon. If reports are to be given any cognisance, then it is most likely that Samsung will release its first smartphone with a bendable display in 2019. Speculation has already hinted that the smartphone is to be called Galaxy X and will most likely feature a flexible OLED display.
If this move is successful, it is bound to bring bendable devices closer to reality which might serve to be a breath of fresh air in the current market.
Meanwhile, back to our very straight and stiff devices for now! We’re crossing our fingers and hoping for Samsung’s bendable smartphones to soon be a reality!
Back in October of 2015, Microsoft released its Surface Pro 4 tablet, which received a lot of critical acclaim for its extremely bright display, its compact physique and its efficiency. The fact that it enabled screen touch operations coupled with a stylus and a keyboard, went down well with its consumers and it became a hot favourite in the market for office-goers.
About one and a half years since its release, Microsoft is set to unveil its upgrade, the Surface Pro 5 model at a mega event, soon. However, the features on the new device might not set it very far apart from its predecessor.
If the leaks and speculations for the new tablet are to be believed, the upgrade might not live up to the trajectory of evolution that the Surface Pro 4 had triggered over the outgoing Surface Pro 3.
The ‘changes’ (if we can call the leaks that) aren’t as monumental as one would expect from a giant like Microsoft. One can only hope that Microsoft has something else up its sleeve, if it’s to maintain the momentum and the fan following that the Surface Pro line has been able to garner. Pro-level iPads, and Samsung’s new Pro tablet, the Galaxy Tab Pro S, do have a lot to challenge the Surface Pro with.
Here is what we’ve gathered about the rumoured Surface Pro 5 so far.
The display is set to be similar to the Surface Pro 4’s. That said, the resolution is expected to beef up to 4K Ultra HD.
The size of the screen will most likely be close to 12.3 inches, much like its predecessor.
Why is this modernisation not enough? Well, given the rapid expansion of phones, and tablets replacing laptops, other features like 3D sensors or 3D screens would have been an upgrade to look out for.
Let’s move onto the processor. Well, this bears good news. Intel’s Kaby Lake processors will feature in this model. The processor would be a serious upgrade over the i7, which, till date, impresses first-time users. The Kaby Lake processors are state-of-the-art chipsets that will surely cope far more effortlessly with the growth in app requirements expected through 2017-18.
Moving onto the rejigs, Microsoft is looking at versions of the Surface Pro 5 which will allow mobile phones to tether and provide wireless data to the laptop. This would be a significant upgrade in the connectivity arsenal, as it would address an important criticism that the Surface Pro 4 faced throughout its life.
Along with that, the Surface Pro’s stylus will go through a smart upgrade of its own, as it may support wireless charging.
The memory is decent for the nature of this device. It is most likely going to offer 16 GB of RAM coupled with 512 GB of onboard storage, which is not bad at all.
Finally, analysts recently suggested that the device would retain the Surface Connect power connector and will not move onto the USB C connector.
Here is the issue. As mentioned before, with the rate at which mobile devices are sporting upgrades, the buzz in the market for the Surface Pro 5 may be muted, given the minor changes.
Considering the hype surrounding increasingly large and competent smartphones like the iPhone 8 and Samsung Galaxy S8 – which are doing to tablets what tablets themselves did to laptops, becoming crossover business-capable devices – it is important that hybrid tablets like the Surface Pro 5 offer more to the consumer.
We might slowly be drifting towards an age when tablets too will lose their efficiency and prestige. While performance is surely going to improve thanks to the Kaby Lake processor, the worry is that in the long run, tablets might be relegated to gaming and graphics or editorial work, unless they improve on features.
Slow Wi-Fi has always been a cause of concern for people accessing the internet through Wi-Fi pods available at public locations, at home or at work. Unfortunately, too much traffic and load has quite often resulted in a slow, congested Wi-Fi service, which not only becomes an impediment for work, but in many cases meddles with people’s right to receive important social services. Given the recent app revolution, emergency services apps are gaining traction along with apps that make everyday life easier. A roadblock in the form of a slow or patchy Wi-Fi service can exacerbate issues which require immediate supervision and action.
However, this might not remain a problem much longer. Well, call us soothsayers – we’d first written about Li-Fi back in February of 2013! We’d educated our readers about this novel and extremely exciting tech that was being developed as a joint venture between the universities of Edinburgh, St Andrews, Strathclyde, Oxford, and Cambridge. An extremely elucidatory article, and I highly recommend you read that one first, before proceeding with this current “update”.
The tech was exciting enough for us to research it over and over. We wrote about it again in January of 2016, when Velmenni, an Estonian company, had conducted some detailed tests on the tech. You should read that too, to know why Li-Fi is possibly the answer we’ve been looking for, for our networking woes.
Obviously, we aren’t the only ones besotted by this breakthrough technology.
Researchers at the Netherlands’ Eindhoven University of Technology have been working on Li-Fi and have developed a newer version of the wireless internet system, based on harmless infrared rays, which will not only make the internet experience smoother for the consumer, but also increase speeds to a hundred times what’s available these days.
The scientists expect internet reception speeds to reach an astounding 40 gigabits per second. The wireless service also does not require the user to share the connection, since every device gets its own, independent ray of light from the wireless server!
How does it work?
The system is very simple and cheap to set up. A few central ‘light antennas’ are the source of the wireless data: they very efficiently and precisely direct rays of light towards the communication and electronic devices being served – phones, tablets, desktops and the like. Each antenna comprises a pair of gratings that radiate light rays of different wavelengths at different angles; the direction of a ray changes as its wavelength changes.
It is for this reason that the term ‘Li-Fi’ (Light Fidelity – light in place of radio) has been coined for this new wireless internet system.
The radio signals transmitted by the network constantly track every device by receiving a radio signal in return. If a user walks around with her device and escapes the light antenna’s line of sight, another light antenna picks up the infrared ray to provide an uninterrupted service.
There’s another benefit – every time a new device connects to the network, it does not require you to share wireless bandwidth because each device is assigned a different wavelength. So there’s no splitting or sharing of bandwidth in the classical sense.
Apart from that, in the current wireless disposition, radio signals operate at frequencies of 2.4 or 5 gigahertz. The new Li-Fi system, by contrast, utilises infrared light with wavelengths of 1,500 nanometres or higher, providing data through frequencies of around 200 terahertz – an unbelievable improvement.
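Those numbers check out with a quick calculation: a wave’s frequency is the speed of light divided by its wavelength, so 1,500 nm infrared light really does sit at around 200 THz – tens of thousands of times the frequency of the Wi-Fi bands in use today:

```python
# Frequency of a light wave: f = c / wavelength
c = 2.998e8            # speed of light, m/s
wavelength = 1500e-9   # 1,500 nm infrared, in metres

f_infrared = c / wavelength
print(f"{f_infrared / 1e12:.0f} THz")      # ~200 THz

# Compare against the 5 GHz Wi-Fi band
print(f"{f_infrared / 5e9:,.0f}x higher")  # ~40,000x
```

More carrier frequency means vastly more raw bandwidth to carve data channels out of – that’s where the headroom for 40 Gbit/s comes from.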
The technique is also harmless: since safe infrared wavelengths are used, it does not cause any health problems.
Despite all the progress, and intensive testing conducted over the last 3-4 years, Li-Fi is expected to take another 5 years to hit the stores.
This system might turn out to be a blessing for developing economies that are increasingly employing digital and internet-based solutions. It is a cheaper and more efficient alternative to regular radio waves, which are becoming more and more congested. In fact, one of my favourite benefits of this new system (other than the superb speeds, of course) is that routers and network stations will not crowd the environment outside the home or office. With the signal restricted to areas inside the home, one won’t get to see the three million other internet connections in the vicinity, and the router won’t be adding more traffic to an already crowded environment.
Therefore, homes, offices and public services would find a significant boost in efficiency along with more accessibility. And won’t that be a blessing!
Control Your Phone Through Earbuds That Can Detect Facial Expressions
One of the things I like most about technology is that it is a great enabler, capable of serving all humans equally.
From providing help to the visually impaired to helping people with speech disorders, technology has always been a boon to people with disabilities.
Watches like Dot, the world’s first Braille smartwatch, help the visually impaired interact with the watch and even deal with notifications on their smartphone, the same way you and I can.
An app named Talkitt helps convert hindered speech into understandable vocalisation so that one can understand what a person with speech impairment is trying to convey.
The Sesame phone is a smartphone made for people with limited mobility – as it tracks small movements using its front facing camera, and facilitates processes and interactions on the phone without even touching it!
Technology has updated itself time and again to meet the diverse needs of people across regions and physical abilities. And it is an ode to all those good people out there who are considerate towards such diverse needs and enable our brethren in this fast-paced world.
Let me introduce you to another such invention aimed at helping people do more, without doing more.
The technology that we shall focus on in this article deals with the science of Gesture Control.
Gesture Control is a relatively new feature which is being explored and integrated by smartphone- and technology companies around the world. It aims at observing and interpreting simple gestures and converting them into programmed actions. Best example? Microsoft’s Kinect, the motion-sensing accessory for its Xbox console, which enabled players to do everything through gestures – turning up the volume, changing games, even playing the games – all without physical controllers.
Now, Computer Interaction researcher, Denys Matthies has taken this science down a completely different route.
Matthies is in the process of creating earbuds that would allow the wearer to control her music, open text messages and give commands to her phone simply by detecting her facial expressions! All through… the ear canals.
How does this work?
Well, the logic behind this is actually a revelation. A person’s ear canals move simultaneously with her facial changes. The contours within the canals respond to every minute change in the wearer’s facial expression, thanks to all the muscular reactions that enable that expression.
These changes are precisely detected by the earbuds and converted into commands for the smartphone paired with them.
How efficient are these earbuds?
Matthies says the earbuds are highly responsive. They can sense if the wearer is smiling, winking, making a “shh” sound, opening her mouth or turning her head – with a whopping 90% accuracy!
The earbuds come with special electrodes that create an electrical field that makes all the magic happen.
Using the electric fields so generated, the earbuds are able to detect the bends and flexes of the ear canals as expressions are formed on the wearer’s face. Once detected, the algorithm (and perhaps actions personalised through an app on the phone) interprets them, and can then perform consequent actions on the earbud and the paired phone.
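Matthies hasn’t published his algorithm, but one simple way such a system could work is nearest-centroid classification: average the sensor readings recorded for each expression during training, then label a new reading by whichever centroid it sits closest to. Here is a minimal sketch of that idea – every sensor value and expression label below is invented purely for illustration:

```python
import math

# Hypothetical "centroids": averaged ear-canal deformation readings
# (arbitrary units from the electrode field sensors) per expression,
# as they might be captured during a calibration phase.
CENTROIDS = {
    "smile":      [0.8, 0.1, 0.3],
    "wink":       [0.2, 0.9, 0.1],
    "open_mouth": [0.5, 0.4, 0.9],
}

def classify(reading):
    """Return the expression whose centroid lies nearest to the reading."""
    return min(CENTROIDS, key=lambda name: math.dist(reading, CENTROIDS[name]))

# A new sensor reading close to the "smile" centroid gets labelled as a smile.
print(classify([0.75, 0.15, 0.25]))
```

The real earbuds would map each recognised label to a phone action – play/pause, answer a call, and so on – via the companion app the article describes.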
Thinking about it, I realise, not only is this innovation a boon for the differently abled, but will also cater well to the commercial market.
One could play and pause music, answer a call, open a text or simply click a picture – all that without lifting a finger (metaphorically… of course you’ll have to hold your phone up to take the picture!).
With these features and a smartphone capable of deciphering these signals, the entire technological world could go through what might be called a “handsfree revolution”.
What do I love the most? There’s another reason for me to smile more through the day!
We’ll let you know when these earbuds become available commercially.
People say Apple is ruthless when it comes to ditching companies – we think it’s actually more attuned to its own needs, and when the time comes that its current crop of vendors can’t keep up with Apple’s exacting requirements to empower its products, the tech behemoth doesn’t dither in setting up a new team to design and create the hardware it needs.
The latest announcement that Apple’s decided to build its own graphics chips within the next 15-24 months is a prime example.
While the implicit aim is being cited as Apple’s desire to halt its dependence on Imagination Technologies – the company that currently manufactures the graphics processor used in Apple devices like the iPhone, iPad and even the iPod Touch – we at Chip-Monks believe there’s a more fundamental reason, than even cost.
But why did Apple make this move?
Apple had earlier made a similar move with regard to the central processor used in its devices. The company had dumped the PowerPC CPUs when Intel’s X86 silicon was picking up pace.
We believe that apart from ending its reliance on Imagination Technologies, an independently-made graphics processor would also allow Apple to build chips that perform better around iOS and are far easier on the battery.
For a company that has long been reputed for building its hardware around its software, unlike every other tech company in the world, surely this plateauing of processor performance would not be acceptable. And truth be told, almost every one of us realises that if you want to build something really well, usually the best thing to do is to build it yourself.
Well, Apple certainly has the resources, and the desire to achieve that!
What is Imagination’s reaction to it?
Imagination is currently playing the violation-of-Intellectual-Property-Rights card.
The company claims that the technology required by Apple to make the chips would violate “Imagination’s patent, intellectual property and confidential information.” Imagination further adds that Apple won’t be able to perform the intended task without breaching its Intellectual Property Rights.
No doubt Imagination is crestfallen – working with Apple is an unparalleled feather in any hardware manufacturer’s cap. Losing that accolade is even more conspicuous.
Apple also seems determined to hold its customers’ interests above its own profitability. Apple owns about 8% of all shares in Imagination Technologies, and following the announcement, Imagination’s share value declined by about 70%. Yet Apple doesn’t seem to mind that loss, because it’s focussed on something more important to its core values – customer interests.
More Work On Creating A Self-Healing Material For Your Smartphone - All Hail Wang!
If, like me, your phone tends to meet the floor regularly, then you’d probably get a bit of a smile on your face when you hear that soon you’ll be able to drop your phone without much care, and definitely minus that mini heart attack that usually accompanies the battle lost to gravity.
A scientist at the University of California at Riverside has found a way to save us all from the mess of these melodramatic emotions – he’s invented a material that can actually repair itself – and the first application he’s envisioning for it is… smartphones.
Chao Wang, heading the Self Healing Material research, was inspired by Wolverine’s innate abilities to resurrect himself, “He could save the world, but only because he could heal himself. A self-healing material, when carved into two parts, can go back together like nothing has happened, just like our human skin. I’ve been researching making a self-healing lithium ion battery, so when you drop your cell phone, it could fix itself and last much longer”.
Well, all things considered this self healing material is no less than a mutant-like technology, so the inspiration makes sense.
So how does it work?
The material is formed of stretchable polymers that can not only be stretched up to 50 times their size, but, after being torn in two, can heal back together within 24 hours!
Wang has made use of ion-dipole interaction – which is the force between charged ions and polar molecules caused by the attraction between molecules and ions – leveraging it to enable the material to heal itself.
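For a sense of scale, the ion-dipole interaction energy between a charge and a perfectly aligned polar molecule is given by the textbook formula U = k·q·μ/r². A back-of-the-envelope calculation – using standard values for a sodium ion and a water dipole, purely for illustration, not figures from Wang’s polymer – shows the attraction is on the order of tens of kJ/mol, strong enough to pull a severed interface back together:

```python
# Ion-dipole interaction energy, U = k * q * mu / r^2
# (maximally aligned dipole, cos(theta) = 1). Illustrative textbook values.
k = 8.9875e9     # Coulomb constant, N*m^2/C^2
q = 1.602e-19    # elementary charge (e.g. Na+), C
mu = 6.17e-30    # dipole moment of water, C*m
r = 3.0e-10      # ion-dipole separation, m (~0.3 nm)

u_joules = k * q * mu / r**2                  # energy per ion-dipole pair
u_kj_per_mol = u_joules * 6.022e23 / 1000     # scale up by Avogadro's number

print(f"{u_kj_per_mol:.0f} kJ/mol")
```

Crucially, unlike a covalent bond, this attraction re-establishes itself spontaneously when the two halves are brought back into contact – which is the essence of the self-healing behaviour.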
That’s not all. The self-healing material also conducts electricity – via ions rather than electrons – which makes it suitable for touch screens, batteries and other parts of the phone.
In fact, those of you who’ve kept up with technology would recall that the LG G Flex could similarly rejuvenate its back cover after dings and scratches – the material used there had properties similar to Wang’s invention.
While Wang’s material has undergone various tests and been found successful in healing itself from cuts and scratches, it isn’t yet perfect.
Tests yielded issues with the material in humid environments. Per Wang, “Water gets in there and messes things up. It can change the mechanical properties. We are currently tweaking the covalent bonds within the polymer itself to get these materials ready for real-world applications”.
So, how soon will this be available to the people?
Although the material seems “far away for real application”, Wang has confidently stated that “within three years more self-healing products will go to market and change our everyday life”.
So continue to be careful with your phone, till 2020 or thereabouts!
Apple Patent Application Alludes To The Touch Bar And Touch ID On A New Magic Keyboard
It really excites people (including me!) whenever Apple is awarded a new patent. Patents inherently give us a sneak peek into the minds of those secretive folks at Apple. And this is Exciting!
Not all its patents transform into reality – but one can hope.
Apple had introduced the Touch Bar last year, in its upgraded MacBook Pro laptop. However, other users (desktop and even netbook users) immediately felt envious – they couldn’t take advantage of the new capability even though their machines were quite capable of leveraging it.
It appears that Apple is finally attempting to address the grievances of its users.
“One of the comments people have been making about the Touch Bar on the new MacBook is that it will be irrelevant to those professionals who mostly use their Mac at their desk with an external monitor and keyboard. Unless Apple can offer an external keyboard with a Touch Bar, the feature may not see much use”.
The language of the patent is as deliberately difficult as ever:
“In some embodiments, the device may also include a processing unit positioned within the housing, and a primary display positioned at least partially within the housing and configured to display a graphical-user interface executed by the processing unit. In some embodiments, the display is an organic light-emitting diode display”.
In English: reading into the above text, we believe Apple has hinted at the possibility of a keyboard with a Touch Bar built into it. The electronic device may even be just an external keyboard like its Magic Keyboard; in fact, images from Patently Apple point at the possibility fairly strongly.
And there’s more! Not just the Touch Bar – we believe the upgrade may also add another really cool feature: the Touch ID fingerprint sensor (like those used on the iPhone and iPad).
What does that mean? Well, it means that soon iMac, Mac Pro and MacBook users would be able to use the Touch Bar functionality and unlock their machines via their fingerprints!
While such a device is yet to be announced, we feel confident that you might soon see one at an Apple Store near you. So, if you’ve been waiting to buy an external keyboard for your MacBook, then I suggest you hold off for a bit, my friend!
Tesla has pretty much had the high-end electric car market to itself since it burst upon the scene in 2008, and that certainly has been surprising. In a market where companies spring up like mushrooms, no one has yet been able to match what Tesla has been delivering for almost a decade.
Tesla’s Model S has ruled the roost for nearly five years, while potential competitors are still stuck in the prototype stage.
But that might be about to change.
Formerly known as Atieva, Lucid brought a prototype of its first model to Washington D.C. in the United States, this week, for people to check out.
One of the first things that commands attention about the car is its compactness. This comes from, as Lucid’s CTO (and former Model S chief engineer) Peter Rawlinson explains, the goal of being similar in size to the Mercedes-Benz E-Class on the outside, but with S-Class-beating space on the inside.
The car, by the way, is called Air.
Quite like the Tesla Model 3, the Lucid Air lacks a traditional grille, which may be off-putting to some. But the cooling needs of electric cars are different from those of the ones with combustion engines, so that’s that. However, the car still includes a smaller air vent below the headlights, which means the front of this one is less alien-looking than that of Tesla’s upcoming beauty.
One of the features of the electric car that’s worth showing off again and again, is the reclining back seat. There are three different seat settings: upright, slightly reclined and fully reclined. You can go pretty much horizontal in the back seat of the car, if you’d like, and perhaps star gaze.
The reclining feature also helps highlight the Air’s unique windshield-sunroof hybrid. The windshield extends over the roof of the car, over a foot past the point where windshields normally end. This is another feature that seems borrowed a little from Tesla. So it is quite likely that Lucid might face the same too-much-sunlight problem that Tesla has.
Gonna distribute free sunglasses to owners, are you too, Lucid?!
Well, they might be working on a solution already. Lucid has a pretty killer solution to the too-much-light problem: the Air will come with a canopy roof option that uses electrochromic glass that can adjust the amount of light that enters. This is similar to what Boeing uses for its 787 Dreamliners. Bringing out the big guns certainly!
That’s not the only big gun, though. Another big gun that Lucid has up its sleeve is that the Air will come with all the hardware necessary for full automation – cameras, sensors, etc.
The catch, however, is that, quite like Tesla, the software won’t be available until that time when Lucid determines the technology is advanced enough to activate self-driving capabilities.
Come on Lucid, come on! We have our fingers crossed.
The first model of the Lucid Air (with or without the automation) is expected out in 2019. It is a rear-wheel drive electric vehicle with 400 horsepower (298 kW) and a range of 240 miles on a full charge. The Lucid Air is said to be targeting a price of USD 60,000.
But… hold your horses – that is not the version of the car that will make the most noise!
The company reportedly plans to offer two versions of the car – one with a 100 kWh battery pack, and the other with a 130 kWh unit. The latter will have a range of 400 miles.
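Those two spec points imply an efficiency figure worth noting. A simple division (of the quoted numbers only – real-world range varies with speed, load and temperature) puts the bigger pack at roughly 3 miles per kWh:

```python
# Quoted figures for the larger Lucid Air battery option
pack_kwh = 130      # battery capacity, kWh
range_miles = 400   # claimed range, miles

miles_per_kwh = range_miles / pack_kwh
wh_per_mile = pack_kwh * 1000 / range_miles   # energy drawn per mile

print(f"{miles_per_kwh:.1f} mi/kWh, {wh_per_mile:.0f} Wh/mile")  # 3.1 mi/kWh, 325 Wh/mile
```

That 325 Wh/mile figure is also a handy way to size the charging problem – it tells you how many kWh a charger must deliver for every mile of range it adds.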
An electric car that goes up to 400 miles, now that would be a wonder!
For that to properly work, however, the cars would need charging points of higher wattage. Recent investment by companies like ChargePoint should mean a network of higher-kW DC fast-charging options will be in place by the time the cars are actually ready for people.
The battery set itself is another highlight of the electric car. The batteries that Lucid’s going with use chemistry developed in collaboration with Samsung, with a focus on being tolerant to repeated fast charging cycles.
This becomes important when you move to markets like China, or India, where the users might not necessarily have the option of a garage to charge their vehicle overnight in.
With things like these, newer, and better opportunities for the vehicle seem to be opening up.
Interestingly, Rawlinson, the man behind the car, has respectfully been disagreeing with those who describe the Air as a Tesla-killer. He believes that the car is more likely to compete with luxury coupes like the Audi A6, BMW 6 Series and Mercedes-Benz CLS class.
“We think we have one car that can disrupt a complete set of vehicles”, Rawlinson says.
“The goal was to build a vehicle that’s easy to park and store in an era of limited space, especially in urban environments where that space is a premium. A more compact car is a more pleasurable driving experience”, Rawlinson adds.
Given that it’s developing the motor, transmission, batteries, electronics and software in-house, it’s impressive where the company has reached. Still, per industry watchers, Lucid has a long way to go. And given that its self-imposed timeline is 2019, Lucid’s going to be a busy bunch!
For many, what makes this entire gamut even more interesting is the fact that Lucid is backed by the same Chinese billionaire who backs Faraday Future, another company which has the FF91 under testing as potential competition for Tesla. Two horses to challenge a runaway thoroughbred? Interesting!
Expect A Reboot On Apple TV and iTunes Movies - Apple's Just Landed A Big Fish!
Apple never goes out of the news. But it is making too many headlines lately, isn’t it?
With the aim of improving its user experience in the ever-improving video content space, Apple has hired Shiva Rajaraman, a stalwart of the field.
The Information reports that Apple has hired the former YouTube and Spotify executive to better its own video and music efforts.
The man of the hour, Rajaraman has an awe-inspiring profile. He’s worked at senior positions in Google, YouTube, Twitter and then Spotify (where he was last tasked as the Vice President of Product).
Rajaraman will be given the responsibility of improving Apple’s video offerings and other media products like Apple Music, with the clear intent to compete with Spotify.
Apple’s CEO, Tim Cook is a shrewd one – he (and previously Jobs) realized the importance of entertainment and related areas – which they both considered to be the major tools for future revenue growth (especially as hardware sales slow over the years). Hiring Rajaraman appears to be a definitive step in this direction.
As per the information we’ve pieced together (since Apple’s not the most open with such details), Apple’s Eddy Cue, Senior VP of Internet Software and Services, will supervise Rajaraman on the various projects assigned to him.
Apple has not been able to come up with an effective, clear strategy regarding its video arsenal. The lack of consensus over one particular strategy regarding videos has led to frequent debates. Even Apple Music hasn’t really hit the right chords, pun intended.
Interestingly, Apple recently announced two new shows – “Planet of the Apps” and “Carpool Karaoke”. The latter will be launched next month in the new Apple Music section called “TV & Movies.”
This desire to venture into the field of original content is aimed more at getting the upper hand over Spotify than at providing competition to Netflix and Amazon – both of whom spend billions on content.
Rajaraman thus has an important role to play, one in keeping with his previous spots in the lineup. He helped YouTube ink content partnerships with the likes of Disney, and did something similar at Spotify by helping the company get licensed video content from Disney, Time Warner and NBC.
In the past few years, Apple has failed to reach an agreement with cable TV networks, stalling the much-rumoured streaming TV package (a la Netflix).
However, with this move, and its strides in the realm of original content, Apple seems to be gearing up to give Spotify (and subsequently the Big Two) a tough time.
We’re loving it! Who doesn’t love good content?!
Unless you’re an avid watch collector, you’d agree that wristwatches (in their current avatar, at least) are living on borrowed time.
The wristwatch is one of the select few objects that still lives in public memory and reality – despite courting obsolescence.
Yet, for their nemeses – smartwatches – one has to forgo the very question of pragmatism!
A smartwatch is still a template of privilege, one that thrives not on any distinct capabilities, but on how many devices it can emulate (read: copy), not replace.
It is hard to tell if this is a troubling prospect or an encouraging one – the timeline of progress dictates the old to make way for the new. But if the new arrival is just an amalgam of the old, then it is novelty, not progress.
Whatever popularity (can’t call it success, yet) smartwatches have experienced, is not because they’re ingenious as a product, but because they’ve been able to act as a probable all-in-one solution for modernism.
For now, a smartwatch can at best serve as a makeshift backup option. OK for something, but not good for everything.
And thus, the quest to find a smartwatch’s USP must go on.
Someone at Chip-Monks termed smartwatches as “razzmatazz – whose time has not come”. I tend to agree – the smartwatch may have some reason for existence in the future, but at the moment, it doesn’t really make a spot for itself in the world crowded with devices and wearable fitness trackers.
Consider this – they are threatening to act in the same manner as smartphones.
A smartphone replaces the need for a stopwatch, timer, wall clock, thermometer, weather map, GPS and what not. In turn, a smartwatch aims to replace the smartphone itself. But that seems highly improbable until a workable flexible display is found.
Different companies are working to that end – LG, Samsung, Microsoft and even a little-known entity called the Moxi Group from China. Each has its own novel approach, uses different materials and perhaps has different outcomes in mind.
Samsung is one of the companies that has been working on flexible displays for the longest period. They’ve now come up with a new approach to a smartwatch, and the prospect seems to be workable, in theory. We heard about this approach through a patent application that Samsung filed, titled “Display Device And Smart Watch”.
What is interesting about Samsung’s proposed watch is that it is made up of not one but two displays. The primary display, a round screen, handles the generic functions, while the second display is built around the rim of the watch. This means that there’s no real bezel on the watch (scratch alert on!).
As per speculation, this secondary display will use its ribbon shape to carry specific information that doesn’t need to be portrayed on the main display.
The big benefit being that the user would not need to turn on the main display to view critical information like the time – she could simply glance at the rim of the watch! Other information displayed here could include the weather, date or notifications.
I am reminded of the edge display that Samsung’s current flagship smartphones carry – this ribbon display could well carry similar intimation-related info, and not so much interactive information.
But, as we mentioned at the top of this article, this seems like another attempt to mimic and replace the smartphone. The intent is baffling, for most users consider the smartwatch a device of respite, a step away from their phones. Adding more and more smartphone-specific features belies the minimalist benefits of a smartwatch – and may make the choice confusing. At worst, it may not serve that basic purpose of respite at all. Much like the Yotaphone with the e-ink screen at the back – while it did something new, it didn’t do anything that we really needed, wanted or were missing. Hence it never really went anywhere as a product, and disappeared sooner than it appeared.
The secondary display is a novel concept, we cannot deny it. But in the end, it is just that – a concept. The patent has been filed, but it is quite possible that it won’t be part of any production process anytime soon. Still, it will be highly interesting to watch the trends, eh?
Have you ever thought of your favourite virtual haunt, Facebook, as a place where you could find your next job? No, right?
Well, Facebook already is host to a plethora of businesses – 65 million to be precise – that use its Pages product to showcase their wares and communicate easily and quickly with their customers.
Facebook maintains that it is more economical and sociable than maintaining a website – and that may well be true. Especially considering that having a thriving presence, sharing photos, content, customer citations and even new offers is just a matter of a few clicks. Also, unlike a website that needs people to visit it, Facebook takes your business to them, in a place that they spend a lot of their day consuming content.
However, Facebook has just begun to plug a gap that many business owners and employees have both been secretly hoping for. Facebook is rolling out the ability to list and apply for jobs – in a manner akin to services like LinkedIn, Indeed.com, Monster.com and Glassdoor – all through your personal (or your business’) Facebook page!
LinkedIn has dominated the employment scene ever since it launched 14 years ago – but has suffered slight setbacks on two counts.
LinkedIn has been unable to cater to two kinds of people – one, lower-skilled workers, and two, people who are not actively on a job hunt.
“Two-thirds of job seekers are already employed,” says Facebook’s Vice President of Ads and Business Platform, Andrew “Boz” Bosworth. “They’re not spending their days and nights out there canvassing for jobs. They’re open to a job if a job comes.”
It seems Facebook has exploited these two vulnerabilities that LinkedIn has forever been saddled with. And with this, Facebook is poised to change the entire game – writing new rules for it, by playing to its inherent strengths.
Excited? Well, Facebook is doing this strategically. It recently rolled out the new Jobs feature for users in the United States and Canada, enabling companies to post job openings either on their own page and/or on a new jobs page for free.
“Today we’re taking the work out of hiring by enabling job applications [directly] on Facebook. It’s early days but we’re excited to see how people use this simple tool to get the job they want and for businesses to get the help they need,” said Andrew Bosworth, the company’s vice president of business and platform.
Interested candidates would then be just a click away from potential employers as they can simply click the “Apply Now” button right on Facebook.
That done, their application will be sent through Facebook Messenger, with Facebook having pre-filled the form with information like the applicant’s name and education background, on the basis of their public profile.
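The pre-fill flow described above can be pictured as a simple mapping from a user’s public profile onto a draft application form. This is purely an illustrative sketch – the field names and structure are our assumptions, not Facebook’s actual data model or API:

```python
# Illustrative sketch of pre-filling a job application from a public profile.
# All field names here are hypothetical, not Facebook's real schema.

def prefill_application(public_profile: dict, job_id: str) -> dict:
    """Build a draft application using only publicly visible profile fields."""
    return {
        "job_id": job_id,
        "name": public_profile.get("name", ""),
        "education": public_profile.get("education", []),
        "city": public_profile.get("city", ""),
    }

profile = {"name": "A. Candidate", "education": ["B.Tech, IIT Delhi"], "city": "Hyderabad"}
draft = prefill_application(profile, job_id="page-123/job-456")
print(draft["name"])  # the applicant still reviews the draft before hitting "Apply Now"
```

The point of the design is that the applicant never types anything twice: everything already public on the profile lands in the form, and only gaps need filling in.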
That’s not all! Conversations between the parties could even happen directly through Messenger, if so preferred. Though chatting with your potential employer over an instant messenger sounds a little unprofessional, it is in keeping with our times, wherein texting is the most favoured mode of communication.
Job seekers can filter their job search as per factors like city or area, full-time or part-time preferences, and type of work.
Interestingly, for now Facebook is charging no fee for its service of advertising job positions or filling up forms for potential employees.
If you are a keen observer, you might have noticed that Facebook had been beta testing job ads for quite some time. Now that it is rolling the feature out to the US and Canada, it’s pretty clear the feature proved successful in its testing stage.
This feature will definitely bring in revenue for Facebook, as businesses can pay to transform these posts into ads that gather maximum eyeballs by appearing in the News Feeds of a lot of people. Also, if users re-share the job vacancies with their friends, or simply tag friends in posts, the listings will end up garnering attention from a lot more people.
Facebook seems interested in roping in business users and has been working in this direction for a while – it has been pushing Facebook Workplace to business users too.
Facebook’s pitch – reaching millions of its active users who are looking not just for full-time jobs but for freelance or part-time work – looks eminently possible, as a lot of users come to Facebook every day for reasons ranging from infotainment to bragging to the world about their latest foreign trip.
There’s a catch with this entire matter – but I’m going to discuss that in a future article.
And no, I am not talking about the underlying assumption that users would like to use the same platform for serious stuff like looking for a job.
There’s a significant reason why some people would continue to head to websites like LinkedIn to land their dream job. We’ll cover this soon, promise.
LinkedIn dominates the employment scene, but its 467 million user count definitely falls short in front of Facebook’s 1.86 billion active users. Add to that Facebook’s appeal to middle- and lower-skilled workers, and it’s definitely a mountain too tall for LinkedIn in its current avatar.
Whether this is the perfect recipe for success or not, only time will tell. That said, get your CV polished up… and your Facebook profile 😉
Earlier this month DJI filed a trademark application for their new drone, which will reportedly be called Spark.
At the same time, some photos leaked online, which suggest that the Spark could be much smaller than its predecessor, the already-quite-small Mavic.
However, it appears that unlike the Mavic, the Spark will not have foldable legs.
Based on the images, it also appears to feature some form of ground sensor system, which could be used to aid automated take-off and landing, as well as position tracking. From the looks of it, the camera on the front might be mounted on a 2-axis gimbal.
The bottom cover of the drone is removable and exposes the battery. The battery, from the looks of it, would charge while within the device, but we’re sure the pilot would be able to use a few spares, and an external charger too.
The drone, packed with all this, measures only about six inches long and five inches wide. It is certainly quite small and should fit easily into a small backpack, or a handbag, if you wish to carry it around. This might make it easier to use as well.
This comes after a lot of rumours about a Mavic successor being on the horizon, with contradictory voices describing it as either a version with more advanced technology and better video, or a cut-down, less expensive alternative.
The idea was that this would be in a fashion similar to their already existing line called Phantom, which typically has 2 or 3 different models for each major version.
From the looks of it, this new drone seems to be DJI’s entrant into the “selfie drone” niche that has been heating up in the market lately. Selfie drones are low-budget, short-range drones designed to be simply thrown in the air to grab some quick photos or video clips, and then return.
However, not many of them have been found to be as good as the brochures promise, nor do they live up to users’ inherent expectations.
If this new drone is indeed a selfie drone then it will likely be controlled via a smartphone app over WiFi rather than a dedicated controller. The lack of a controller in any of the leaked photos does go with that suggestion.
We don’t know much more about the device for now, but we do expect to hear more shortly.
We’ll keep you informed.
5G is so new that not many people in the world understand it. Yet there’s so much potential in the technology, that there’s untold merit in discovering and harnessing it, and juicing it for what it’s going to be worth.
And like learning a new subject at school, there’s no better way of playing discovery than having a co-curious mind at work – especially in such adventurous pursuits as new technology!
Swedish telecom major Ericsson is going to have some of India’s best brains to pick – the undisputedly brilliant minds at IIT Delhi.
The two institutions have signed a Memorandum of Understanding to collaboratively start up the ‘5G for India’ program, to deploy 5G technology in the country. The other major targets of the program include a fast-track realization of Digital India initiatives and to push the application development for Indian start-ups and industries.
Ericsson is going to set up a Center of Excellence at IIT Delhi, which will include a first-of-its-kind 5G test bed and incubation centre. Research and development will also be conducted at IIT Delhi, with the aim of exploring how mobile technologies could prove useful in dealing with certain challenges the country is facing.
Paolo Colella from Ericsson said, “The program will focus on delivering research, innovation and industrial pilots that use next-generation 5G networks as an enabler. It will help initiate cross-industry research collaborations focused on the integration of ICT in industry processes, products and services”.
Joakim Sorelius, Head of 5G, Network Products, at Ericsson shared how vast the 5G potential was in India in the coming years. “A new report from Ericsson suggests that 5G-enabled digitisation revenues in India will be $25.9 billion by 2026. The Indian operators can generate additional revenues of $13 billion or half of the stated potential if they take up roles beyond connectivity and infrastructure providers and become service enablers and service creators.”
Exciting as it sounds to be talking about the next level of telecom and communication technology, I need to remind you that there’s a long, long road ahead, and a lot may change in the interim.
However, the start-out plan is as follows.
The first round of tests will be conducted in the second half of 2017, when the capabilities of 5G could be tested live by some of India’s telecom operators, ecosystem partners, academia and analysts, using Ericsson’s 5G test beds.
Ericsson’s 5G radio test beds would have the responsibility of ensuring uninterrupted connectivity for billions of connected devices, machines aiding consumers, businesses and industrial applications. This would place India on an equal level with other developed countries in the realm of 5G networks and application deployment.
By mid-2018, limited deployment and 5G trials would start, and the technology is expected to be commercially available by 2020.
Why was IIT Delhi chosen for the task?
Professor Ramgopal Rao, Director of IIT Delhi, seems to have a valid answer: “IIT Delhi has been committed to developing the latest technologies in close collaboration with industry. We are glad to be hosting the Ericsson Center of Excellence and Incubation Center, providing a big leap forward for 5G technologies ecosystem development in the country. Our core strength of academic excellence will provide a perfect partnership platform with Ericsson and contribute to India’s Digital vision”.
Lest we forget…
Nokia is also setting up an experience centre in Bengaluru to try to better understand the stakeholder-requirements for 5G in India – which in basic English means that they’re going to be visiting their existing and potential clients to establish each client’s needs and expectations from 5G, and that entire ecosystem – before commencing the solutioning and implementation.
Meanwhile in India Nokia has signed memorandums of understanding (MoUs) with Indian telecom giants Airtel and BSNL.
“Thoughts behind these MoUs would be to introduce 5G here, and what are the steps required for the same, besides identifying applications to define the target segment, which will lead to a complete 5G strategy for telcos,” Sanjay Malik, head of India Market at Nokia, shared with the Economic Times.
Elsewhere in the world…
5G interoperability and over-the-air field trials are being conducted in the U.K. where Ericsson, Qualcomm and Vodafone are testing 5G system solutions and devices to ensure that the new technology can be used on a wide range of bandwidths (existing bandwidths as well as sub-6 GHz).
One of the advantages of 5G is its increased network capacity, which in turn allows for a higher density of mobile broadband users and effectively supports upcoming technologies like Virtual Reality, Augmented Reality, Internet of Things (IoT) devices and our now-critical Cloud services.
5G should make its mark in the public safety, manufacturing, energy and automation sectors and our transition to 5G would undoubtedly enable superior internet capabilities with higher security and reliability along with lower latency.
So, it’s just a few years to go for us to be seamlessly connected to our devices from all corners of the globe!
India's Getting Its First 1 Gbps Broadband Service!
“India” is everywhere, especially with the ‘push India forward’ intent of the current government.
Say what you may about the political landscape these days, but you’d still agree – never has India seen an emphasis on Make In India, or “do for India” as of the last 3 years. MNCs, Indian brands, Indian bureaucracy, Indian infrastructure establishments – every one of them is engaged in some way, to push India over the crest.
One of the biggest movements is the goal to digitise India. Indian Telcos – large and small, are all bending their backs to connect India and empower Indians; as are global behemoths like Facebook and Google.
ACT Fibernet (Atria Convergence Technologies) is one of our homegrown providers that is working to connect India too. And, it’s just taken a giant leap forward!
ACT just announced their plans to launch wired broadband internet services with speeds of 1 Gbps throughout Hyderabad. It is for the first time in India that an entire city would be “giga enabled”.
Once implemented, Hyderabad’s populace would enjoy 400 times the national average speed (2.5 Mbps)! With that, Hyderabad would join the vaunted list of world cities that have 1 Gbps (or faster) connections available to all their citizens.
That’s not all, ACT says that it plans to similarly enable ten other major cities as well, in due course.
Wondering what 1 Gbps could actually do?
How’s this – It will allow you to download from the internet faster than you’d be able to transfer data from a USB drive to your computer!
It’s fair to say that infotainment is about to hit an altogether new level, as you’d be able to upload and download huge files in a matter of seconds, simultaneously. This means that your huge files and device backups could finally be uploaded to the Cloud in just a few seconds, all while downloading HD movies from Netflix at the same time.
Data limits and anxiety about speed would be a thing of the past.
Bala Malladi, CEO of ACT Fibernet, says that a 1 TB Fair Usage Policy (throttling of the maniacal speed down to something significantly lesser) would apply. He also cautions that while 1 TB might appear huge (considering that most connections today are capped at about 30 GB, the max being 200 GB), it won’t take much time for a bunch of teenagers to swallow it up!
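To put these numbers in perspective, here’s some quick back-of-the-envelope arithmetic (the 5 GB movie size is an illustrative assumption), comparing the 2.5 Mbps national average against 1 Gbps, and showing how quickly a 1 TB cap could theoretically be exhausted:

```python
# Back-of-the-envelope bandwidth arithmetic.
# Note: network speeds are quoted in bits per second, file sizes in bytes.

def download_seconds(size_gb: float, speed_mbps: float) -> float:
    """Time in seconds to download size_gb gigabytes at speed_mbps megabits/s."""
    size_megabits = size_gb * 1000 * 8  # 1 GB = 1000 MB = 8000 megabits
    return size_megabits / speed_mbps

# A 5 GB HD movie (illustrative size):
avg = download_seconds(5, 2.5)    # at the 2.5 Mbps national average
giga = download_seconds(5, 1000)  # at 1 Gbps

print(f"5 GB at 2.5 Mbps: {avg / 3600:.1f} hours")  # ~4.4 hours
print(f"5 GB at 1 Gbps:   {giga:.0f} seconds")      # 40 seconds

# How long would the 1 TB Fair Usage cap last at a sustained 1 Gbps?
cap_seconds = download_seconds(1000, 1000)  # 1 TB ~ 1000 GB
print(f"1 TB cap gone in ~{cap_seconds / 3600:.1f} hours at full tilt")  # ~2.2 hours
```

In other words, a sustained full-speed connection could theoretically burn through the entire monthly cap in an afternoon – which is exactly why the Fair Usage throttle exists.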
But there’s definitely a lot to be had, nonetheless. As Malladi further underlines, the plan will help organisations and startups, whose “cost-effective high-speed internet access will change the face of digital India“.
Read in isolation, this may be interpreted such that a lone user won’t get much benefit out of the plan, but that’s not really the case.
Also, one would require an appropriate Wi-Fi router that can deliver gigabit speeds.
But then, that’s a small ingredient to get onto the superhighway.
With this announcement ACT has also placed itself ahead of Reliance, who claim to provide 1 Gbps connections too – but only in one locality of Mumbai.
You might be surprised to know that there are providers in India who offer even faster speeds than ACT’s proffer! MTNL and Hayai both provide 10 Gbps plans – but only in a few places in Mumbai.
With its city-wide implementation approach, ACT has dealt a body blow to almost every Telco in India.
Malladi found Hyderabad apt for the “maiden launch” as “it has one of the best technological brands, educational institutions and a vibrant economy”.
This plan is a colossal step towards achieving the target “to connect all the 23 million citizens of Telangana by 2018”, Malladi said.
Wallowing over the fact that your city was not chosen for the plan? Fret not, it’ll happen soon – whether ACT lands there first, or another Telco does – fact is ACT has acted, and set the cat amongst the pigeons!
Meet Elon Musk's Next Brainchild, Neuralink - It Will Plug AI Into Your Brain!
After rockets and self-driving, mind-bogglingly fast cars, Elon Musk is all set to hack your brain now!
The Tesla-famed Musk just launched a brain-computer interface venture called Neuralink.
The company, which for now is only in the early stages of even coming into existence, is centered around creating devices that can be implanted in the human brain.
But why would someone want to do that?!
Well, like we said, to hack your brain.
The ultimate purpose of this, they say, is to help human beings merge with software and keep pace with advancements in artificial intelligence. The initiative is supposed to improve a person’s memory and allow for more direct interfacing with computing devices.
While little is known about the company for now, the end goal seems to be to allow humans to seamlessly communicate with technology without the need for an actual, physical interface.
Registered in California, in July last year, the company’s initial focus was to use their proprietary interface to help identify and alert users to the symptoms of chronic conditions – from epilepsy to depression. Now, however, it seems to be on a supposedly greater path.
Well, the news, even though it is certainly big, does not entirely come as a shocker. Musk has, over time, hinted at plans to launch a venture of this kind. “Over time I think we will probably see a closer merger of biological intelligence and digital intelligence”, Musk said to a crowd in Dubai, later adding, “it’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output“.
What Musk is striving for with Neuralink only exists in science fiction today. Readers of sci-fi would know it as “neural lace” – shorthand for a brain-computer interface humans could use to improve themselves.
But that, for now, is far from any technology we have.
Yes, there are supposedly “cool” applications of similar technology in the real world today. Documentary-maker Rob Spence replaced one of his own eyes with a video camera in 2008; amputees use prosthetics that connect to their own nerves and are controlled by electrical signals from the brain; and implants are helping tetraplegics regain independence through the BrainGate project. Things are moving in that direction anyway.
Electrode arrays and other implants have been used to help ameliorate the effects of Parkinson’s, epilepsy and other neurodegenerative diseases, in the realm of medicine, but it is still quite certainly a very controversial process. Given how incredibly dangerous and invasive it is to operate on the human brain, only those who have exhausted every other medical option choose to undergo such surgery as a last resort.
So getting implants in your brain for something of this “whimsical” use is going to be a far-fetched thing by all means.
But this has not stopped the Silicon Valley’s interest in the field. One such firm is Kernel, a startup created by Braintree co-founder Bryan Johnson.
Kernel is also trying to enhance human cognition. Its growing team of neuroscientists and software engineers is working towards reversing the effects of neurodegenerative diseases and, eventually, making our brains faster and smarter – but of course, more wired.
“We know if we put a chip in the brain and release electrical signals, that we can ameliorate symptoms of Parkinson’s”, Johnson said in an interview last year. “This has been done for spinal cord pain, obesity, anorexia… what hasn’t been done is the reading and writing of neural code”.
Johnson says Kernel’s goal is to “work with the brain the same way we work with other complex biological systems like biology and genetics”.
Kernel and Johnson have been quite upfront about the years of medical research still ahead of them, though. Now that Musk has founded Neuralink, we are hoping for a similar attitude on his part as well.
With this company, however, Musk is doing what he does best – tapping into an incredibly timely and topical technology that is already being worked on by researchers across the globe, but in his own unique and business-savvy way.
When he embarked on space exploration with SpaceX, it was quite certainly not the first private space company. He took what was already being done, and set out with a plan to create affordable, reusable rockets, before scaling up to Mars missions.
With Neuralink, he seems to be doing the same – cracking the more seemingly realistic and profitable challenge of symptom control, before venturing into total man-machine brain mergers.
The traditional SIM card has been dying a slow death over the last few years.
The normal SIM cards (now called “Mini SIMs”) that seem prehistoric now set the ball rolling for mobile telephony. Then we got micro SIM cards as smartphones arrived. As devices packed in ever more hardware, the real estate within them came at even more of a premium, hence smartphone manufacturers hit upon nano SIM cards.
Now, it’s time for better technology in an even smaller package – called the e-SIM.
In light of this changing environment, the GSMA (which represents carriers and mobile companies around the world) has announced the specifications for e-SIMs, which are expected to be used in smartwatches, fitness trackers and even tablets. These SIMs will give users the freedom to activate the SIM embedded in those devices on any carrier of their choice, as well as the convenience of switching carriers and devices without swapping SIMs.
If everything works out as planned, the team behind the development of e-SIM suggests the new technology will be rolled out by 2018.
For now, the leaders of the smartphone industry are in talks with American and British mobile carriers with the intent of making e-SIMs a reality in those regions.
Apparently, conversations are already on with AT&T, T-Mobile, Deutsche Telekom, Vodafone, Orange, Etisalat, Hutchison Whampoa and Telefónica – some of the biggest carriers around the world.
The GSMA plans to roll out a similar standard for smartphones themselves in June, at which point the days of the SIM card could be numbered.
This specification is also backed by manufacturers such as Apple, Samsung, BlackBerry, LG and Huawei! The freedom and convenience of switching operators is best understood by device manufacturers – it drives better customer satisfaction and frees the manufacturer from having to kowtow to operator demands. It even helps them move inventory around more seamlessly, instead of suffering the logistical nightmare they currently face – the device is operator-agnostic, but since it was packed with a specific operator’s SIM (at the factory), the manufacturer can’t lift and shift inventory to other regions/stores/operators at will.
Once implemented, this universal tech will allow users to add mobile devices to a single subscription, in turn allowing them to connect directly to any mobile network. No separate SIM cards, no phone-as-a-middle-man, just an embedded SIM in each device, programmed to connect to a network all by itself!
Not that the GSMA sees it that way. It says “the initiative does not aim to replace all SIM cards in the field, but is instead designed to help users connect multiple devices through the same subscription and will help mobile device manufacturers to develop a new range of smaller, lighter mobile-connected devices that are better suited for wearable technology applications”.
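Conceptually, an e-SIM is a reprogrammable store of operator profiles, of which one is active at a time. The toy sketch below models that idea in miniature – the class and method names are our own illustration, not part of the GSMA specification:

```python
# Toy model of an embedded SIM holding multiple operator profiles.
# Names are illustrative; the real GSMA Remote SIM Provisioning spec is far richer.

class EmbeddedSIM:
    def __init__(self):
        self.profiles = {}  # operator name -> profile data
        self.active = None

    def download_profile(self, operator: str, profile: dict):
        """Provision a new operator profile over the air."""
        self.profiles[operator] = profile

    def switch_to(self, operator: str):
        """Activate a previously downloaded profile - no physical SIM swap."""
        if operator not in self.profiles:
            raise ValueError(f"No profile for {operator}")
        self.active = operator

watch_sim = EmbeddedSIM()
watch_sim.download_profile("CarrierA", {"plan": "wearable"})
watch_sim.download_profile("CarrierB", {"plan": "wearable"})
watch_sim.switch_to("CarrierB")  # switching carriers becomes a software operation
print(watch_sim.active)          # CarrierB
```

The key design shift is that carrier identity becomes data stored on the device rather than a physical card – which is precisely what makes over-the-air switching possible.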
Some Reactions From The Smartphones Industry
“The technology allows an individual to have both, a personal and business number on a single mobile device, with separate billing for voice, data and messaging usage on each number. People can switch between business and personal profiles easily without carrying multiple devices or SIM cards,” BlackBerry India Managing Director Sunil Lalvani said, as per Tech First Post.
Well, Apple has already experimented with its own SIM cards that can swap networks on the fly and let users choose from three different carriers.
For what it’s worth, the first example of a programmable SIM card is already out there – notably in Apple’s iPads. But it wasn’t officially recognised by the GSMA.
The GSMA touts its recently announced specification as “the only common, interoperable and global specification that has the backing of the mobile industry“.
So, the integration of the e-SIM into upcoming iPhones seems like the next logical step for the Cupertino tech giant.
The world’s other smartphone giant also has intentions of using this “programmable” SIM in its smartwatch line.
So this isn’t technology that’s a ways off – you might be using it yourself by the middle of the year!
With the objective of improving connectivity in remote areas, Indian Railways has committed to setting up Wi-Fi hotspot kiosks at 500 railway stations across the country.
This would enable the masses in these possibly remote areas that suffer connectivity problems, to gain access to a range of online services. The initiative has been named, ahem, Railwire Saathi.
Railwire Saathi intends to facilitate the utilization of e-services like e-commerce, online banking, government schemes, open school/university programs, e-ticketing for trains and buses, and the like.
In a country where Mobile Data is the only source of internet access for a majority of the population (outside of the bigger towns and metropolitans areas), an initiative of this kind can be the springboard for change, and perhaps a precedent for more!
What gives me a good feeling about India is that this is not the first initiative of its kind. A project to equip 400 stations with free Wi-Fi, in association with Google, is already under way.
The project was announced in September 2015, with support from the Digital India campaign, and went live at the bustling Mumbai Central station in India’s financial capital.
India’s first railway stop to provide free, high-speed public Wi-Fi became an overnight success.
Six months in and Google’s public Wi-Fi was up at 19 stations, supporting 1.5 million users. Google’s next plan is to go for 100 railway stations.
This new venture, however, goes one step ahead by adding the ever-important aspect of employment.
A senior Railway Ministry official communicated the simultaneous goals under the scheme – and I personally love that this comes with an entrepreneurial model, such that unemployed youths, preferably women, can be trained and supported to set up a Wi-Fi hotspot. Not only will this endeavour connect people, it will provide a platform to online services which will make the business sustainable.
Apart from the mentioned purposes, Railwire Saathi hopes to provide services like automated form filling and enabling payments for commodities and utilities like DTH televisions and mobile connections, in these areas with low connectivity.
Despite the existence of many schemes for farmers, minorities and the common public, the lack of knowledge about them in remote areas makes access difficult. A venture of this kind can be a critical resource for such information – a place where people can go to ask for things, look for things, and get their things done.
The process of setting up a hotspot will begin with a training program under RailTel, the Railways’ telecom arm, approved by the National Skill Development Council (NSDC). Youths can apply to RailTel and receive their training.
There’s more. The certificate obtained in consequence can be used for a loan under the Mudra scheme, to be used to set up hotspots according to the design handed over by RailTel.
Railwire Saathi will also help in spreading information about government schemes and other critical factoids that are otherwise lost to the populations in these areas.
So, all in all, not only is Railwire Saathi going to be a technological advancement in bringing the internet to areas that might be struggling with connectivity, the program will also bring employment to a country that has been riddled with unemployment for decades.
In addition, it could also become a center of knowledge and information in areas where the people are perhaps just too lost sometimes.
It would be enriching to see this unfold.
Apple To Introduce A New Ultra Accessory Connector, In Lieu Of The USB Type-C
When a big tree sways whichever way, the earth shakes.
Such is also the way of the world in the devices universe. Industry pundits are forced to keep their keen ears to the ground, waiting to hear what a certain Cupertino giant is planning to do next, as it would inevitably impact the rest of the world, and not just its own industry.
One of the favourite points of contention that the Android hoi polloi have had with Apple in recent years relates to its ‘whimsical’ choices. Be it the 30-pin port that transformed into the Apple-unique Lightning port, or the absence of the 3.5 mm audio jack – all other device and peripheral makers are constantly having to react to Apple’s decisions about its own devices.
Even now, when the rest of the world is going the USB Type-C way, Apple’s obstinacy in sticking with its proprietary port might appear to be an arrogant move to some.
Furthering the consternation is Apple’s vacillation – it actually is already a strong proponent of USB Type-C, having campaigned for it with their new MacBook notebooks released in 2015, but for some reason, the company is determined to retain the Lightning for its iPhones and iPads – which creates a lot of confusion for people desiring universal accessories that connect to every object in their devices portfolio.
Before we proceed with the story, I’d like to turn your attention to an article we’d written in June 2016, “Could Apple Be Removing The Headphone Jack To Deliver Better Sound To You?“, and I quote (for those of you who won’t be reading that article):
At Chip-Monks, we believe that Apple might migrate us to the Lightning port for another reason too – to deliver improved sound quality. No one knows or understands the potential of the proprietary Lightning port better than Apple. And they’re going to juice the port and its capabilities any which way they possibly can.
So, while a lot of people would’ve been wishing and hoping that Apple would for once acquiesce to user-prayers, with recent conjecture that Apple is launching a new Ultra Accessory Connector (UAC), it looks like people’s dream of a USB Type-C iPhone will forever remain just that.
The Ultra Accessory Connector is actually intended to ameliorate some of the pain felt by Apple loyalists who own devices using both USB Type-C and Lightning ports – but it’s a definitive answer too.
When the UAC was spoken about initially, many, many people reacted adversely, presuming that Apple was yet again changing the port on the next iPhone/iPad.
The UAC is, in essence, an adapter that Apple is offering as an olive branch of sorts. It is to be used as an intermediary between the headphone and the device’s port – splitting the cable in half so that the top part can be universal, and the bottom can be either a Lightning, USB-C, USB-A, or a regular old 3.5 mm analog plug.
The intent is to restore some of the universality of wired headphones – which, until not too long ago, all terminated in a 3.5 mm connector (or 6.35 mm on non-portable hi-fi models designed for at-home listening). With UAC, a headphone manufacturer can issue multiple cable terminations very cheaply, making both the headphones and any integrated electronics, like a digital-to-analog converter or built-in microphone, compatible across devices with different ports.
The reason that this matter raised such a furore is simple – if Apple ever had any intentions to make a switch in its mobile devices to a USB Type-C, it wouldn’t have ever cared to create an exclusive Made For iPhone (MFi) standard for the UAC.
It would have just switched the port!
As I’ve said earlier, it is not that the Lightning port is a whimsical nuisance. In fact, it remains a licensed Apple technology, and thus the company is legally allowed to capitalize on its sale.
The Lightning connector is also a bit smaller, and thus fits the aesthetics of a phone where the emphasis is on strict minimalism.
Also, given the ubiquitous nature of USB-C technology, it becomes really hard to regulate the invisible incompatibility in some cases, and that can be downright destructive for your device.
There has also been the question of why headphone makers can’t start making USB Type-C cables with Lightning adapters. Well, as much as we rave about USB Type-C becoming the democratic tech of the future, it is still a long way off from that – especially compared to Apple’s more than 900 million Lightning-enabled devices already out on the market.
As an accessory maker, you want to sell to the market that already exists first, not the one that is to come.
For Apple, moving to a USB Type-C iPhone would mean a great deal of upheaval, for little payoff. The Cupertino giant has its eyes set on total wireless freedom, and everything – Lightning, USB Type-C, UAC – that it’s working with today is just a temporary compromise en route to that goal.
So no, a USB Type-C iPhone was probably never going to happen. But now that we have the UAC to ease the switching between Lightning and USB Type-C music sources, even daydreaming about it seems silly.
Much as I know that this isn’t what users were looking for, I also know that Apple has its reasons for doing things, usually solid ones – no matter what the populace may claim, nor how loudly the Twitterati may chant slogans and epithets.
It sucks when your money-saving techniques knock you down. This is what some people might have felt after LinkedIn redesigned itself and decided to pull some ‘critical’ data from profiles.
Having seen the new design, there are definitely certain advantages from it – like personalized invites, customized news and a significantly better layout.
But when in history, have pros existed without cons?
The Relationships tab has been removed in the new design – which means that the data that people may have added in this particular feature will soon vanish. If you’re worried about the data loss, then login and download the data before it disappears!
The deadline is Friday, March 31st 2017.
If you’re asking yourself what prompted LinkedIn to make this move, let me help.
LinkedIn offered an explanation, saying that they aim to improve users’ experience with the platform, which sometimes means removing features.
Additionally, to make your desktop experience more enriching, LinkedIn is removing the ability to add Notes, Tags and Reminders to your connections – all of which were located in the Relationship Section of your profile.
Feeling a bit cheated? This is the issue with the free tools and services available on the internet. You can use them, but they will never be yours. And that’s not a bad thing, necessarily – remember my comment about pros and cons?
Well, if you’re one of those who used the Relationships tab often, or uploaded information in such fashion, LinkedIn advises you to upgrade to their Sales Navigator or Recruiter Lite programs – which will allow you to view and transfer your Notes and Tags. Obviously, these are paid features.
Pro Tip: Don’t waste your time trying to download the data from LinkedIn’s mobile app – data can only be pulled using a computer.
The download can be done using the following steps:
If Your Toyota Understands You Better, Thank Microsoft!
Microsoft has finally admitted a bit of a defeat and one can see it finally altering its course in the arena of Connected Cars, via its new agreement with Toyota.
As per the agreement, Microsoft has decided to license a batch of its Connected Car patents to Toyota. This marks Microsoft taking a step back after their failed attempt at including Windows in cars from three years ago.
Rather than thinking of this as Microsoft eating humble pie, we at Chip-Monks actually view this as a sign of a changing Microsoft. Under Nadella, Microsoft is a more thoughtful company and is not averse to changing course, if it helps Microsoft conquer barriers or set up new bloodlines… successfully.
As part of this joint endeavour, Microsoft’s new auto licensing program would provide Toyota access to navigation, entertainment, voice recognition and gesture controls.
On the Toyota side, this would enable them to take their cars to the next level of being connected – and they are a smart bunch, they are. Car makers do not understand infotainment or connectivity as well as they do automobile mechanics. So, leveraging existing world-class technology and expertise, will get Toyota over the fence faster, and indeed make their cars more integrable, than if they tried creating the infotainment package from scratch.
As far as the specifics of the deal are concerned, neither company is revealing much, including the monetary value of the deal. However, what might be key to Microsoft, may well be the fact that the agreement between the corporations is not exclusive and Microsoft can offer its technology to other automakers as well.
This is a classic Microsoft move which they played the first time back when the company was still a baby; give your tech to one, but make sure the fine print doesn’t stop you from giving it to others. Exclusivity has never been part of Microsoft’s ballgame.
What’s also noteworthy is that this is not the first time that Toyota and Microsoft have teamed up for a project. The companies have been working together on Toyota’s Data Science Center – Microsoft’s cloud computing platform is currently being used by Toyota Connected, which in turn aims to individualise customer experience. The partnership between the two, one can say, should be quite smooth then, and one that both of them will benefit from.
As far as counting the candies for Microsoft is concerned, well, this comes on the heels of Microsoft’s attempts to get car companies to use its tech for their connected cars. Microsoft has been trying to swing that one properly for a while now, and one must admit, it has not been doing all that badly.
At present, Renault-Nissan uses Microsoft’s Azure platform, which includes critical services like remote vehicle diagnostics. Microsoft is also working with Volvo, which uses its HoloLens augmented reality platform to interact with virtual parts.
Microsoft’s endeavours with Renault-Nissan and Volvo have been going much better than the one they had undertaken about three years ago – the futile efforts to emulate and create something along the lines of Apple’s CarPlay system. They’d called it the Windows In The Car concept, and that ambitious project couldn’t really be transformed into anything close to a real entertainment system.
This deal with Toyota, which we can assume would include more and more components of a connected car, would only take Microsoft’s efforts a few steps further.
As to where they are headed is concerned, to be honest, Microsoft, unlike the other Silicon Valley bigwigs Apple and Google, is not a company that would really consider making their own cars. They have always been software oriented, and have revelled in knowing that their software gives life to the world’s most advanced hardware.
That’s what they did with computers, successfully, and phones, unsuccessfully. To make their stand on this clearer, Microsoft executive, Erich Endersen stated, “Microsoft doesn’t make cars. We are working closely with today’s car companies to help them meet customer demands”.
That said, the tech giant is certainly working on increasing its sphere of influence across telematics, infotainment and other related systems in connected cars. Harman revealed not too long ago that it is working to integrate Microsoft’s Office 365 into its infotainment systems.
Nissan and BMW, too, are working on bringing Microsoft’s Cortana personal assistant to their cars. With this new deal, we might see other car companies hopping onto the bandwagon soon, if all goes well.
Amazon Considers Opening Augmented Reality Enabled Stores!
Amazon has never really been a tech megabrand, but more like a business that has developed so many wings, that the scope for exploration is immense.
It is within this scope for exploration that Amazon has decided to venture into new territory – Augmented Reality enabled furniture stores.
Sounds unimaginable, right?!
A friend gasped when I mentioned this to her – “What would it even be about?!”, she asked. Well, hence this article.
Amazon’s intent is to enable customers to use AR technology to see objects in the particular (and sometimes peculiar) setups of their homes or offices.
The giant has already taken its online business of books to an actual bookstore in the past, and more recently, has experimented with the idea of a high-tech grocery store. This could be the next step for the ambitious giant.
So, imagine this now: You walk into a store to buy furniture. Now, you like a chair, and would like to see how it would look against the backdrop of the curtains in your living room. But there’s also another couch that you have set your heart on. It could go better, but you are not too sure how its rounded-off edges would look against the square table.
If only you could transport it back, and see how it looked. Well, Amazon wants to let you do just that, except, with AR.
Thus Amazon’s objective is a canny one – to increase the impact on the buyers, and AR is an extremely innovative way to go about it!
What Amazon’s tech is going to do is to let the customers witness the virtual appearance of the product in the desired place – thus making it easier for the enthralled (and suddenly excited) customers to press the Buy button, there and then.
Now that sounds exciting, doesn’t it?! I know where I’d want to shop for my next home!
But wait, Amazon is not done yet. They’re not only thinking of innovating in the furniture market, but also in the electronics market!
The idea of coming up with electronics stores similar to Apple’s has also been doing the rounds. While some of Amazon’s electronic devices are already sold in their book stores, Amazon’s electronics stores are reported to be meant for heavy emphasis on the hardware items of Amazon’s own stockpile – such as its Echo speakers and services like its Prime Video.
Now that sounds fun!
Amazon’s plans however have not yet been drawn up; the ideas are still making the rounds. But knowing Amazon, we can be quite sure that when they get to it, they are quite certainly going to make the best of it.
Amazon’s idea, also, is not the first of its kind, with IKEA already having walked in this direction four years ago.
It does however, reflect on Amazon’s drive to explore domains beyond the usual.
But exploring beyond the usual also comes with its own drawbacks, the major one of which is that your idea might never even make it to the real world. And that could also happen to Amazon’s current muse.
Even though the idea of an AR furniture store is certainly quite amazing, working such a venture out is bound to have many layers of difficulties. This also means that there is a chance that the said venture does not make it to an actual outcome, depending upon various factors.
That said, we at Chip-Monks certainly hope that Amazon does succeed. That store would be so cool to shop at!
In all of this, what is for certain then, is the intent of Amazon to not restrict itself to a web-based company that sells you things. They are working on innovation as much as the big Silicon Valley guys.
Amazon has surprises awaiting us in the future.
Microsoft’s New Patent Hints At A Communication Device That Can Be Folded in Half
A patent filed back in 2015 by the tech giant Microsoft provides a telling hint of the direction that the company’s phone division might be heading towards, for a new range of mobile products in the possible future.
The patent is for –
The points above clearly shed light upon the key facets of the patent and provide significant indication as to where Microsoft wishes to devote its R&D resources.
However, Microsoft is not alone in following this direction – there already are products that have hit the market like Lenovo’s C-Plus (a bendable phone that can be worn around the wrist like a watch), or are almost ready for launch – Samsung is also scheduled to introduce a new bendable phone in the third quarter, and LG has been known to be working on bendable OLED screens for a while now.
Perhaps, the only credible difference between the intention behind Microsoft’s patent, and what has already been released in the market, is the focus on the “obscurity” of the hinges.
The patent clearly states – “In order to reduce and/or obscure the visibility of a support structure for a display panel, the present disclosure provides example display devices including curved or otherwise bent regions for directing light to a user’s eye when the user’s gaze is directed to a support structure at an edge of the display panel. In this way, when a user is viewing a region occupied by the support panel, the user may instead see light from the display panel showing the displayed objects”.
While it all seems the same, in the world of technology, even the smallest of changes can lead to a tectonic shift in the field. But what is much more important is that the intentions behind the filing of the patent are actually acted upon!
The document has been filed by Timothy Large and Steven Bathiche, two Microsoft employees who have filed other patents as well. However, despite being published, no significant progress has been made on those patents; in some cases, they have stalled.
Similarly, thousands of patents are filed every year by resourceful companies but are not acted upon with the same zeal. So whether the rumoured device will definitely be worked upon, and a tangible product will emerge in the near future, is a question still begging to be answered.
Go do it, Microsoft – get ahead!
O is for omniscient and O is also for Overhaul.
It’s also going to be the suffix for the next Android version since they follow the alphabet (oops! that became a pun!)
Well, Android O has just begun its ascension – making its first appearance on March 21, but it’s just a Developer Preview version at this time. It’s already raised hopes amongst all the Oreo connoisseurs of the world that it’d be so named (if KitKat’s possible, then why not Oreo?)…
Just like the four seasons, the new OS will be divided into four Preview versions: Preview 2 will surface somewhere in May, Preview 3 in June, and the final release in the last quarter of the year.
Since this is a Developer Preview, it would be wrong to expect anything super stable. It’s more like an expression of intent at this time. It’s being put out there to showcase new features and capabilities, but in their fairly-rudimentary avatars. As time passes, more features will be rolled out and the buggy applications shall be taken care of.
Also, being at its infancy stage, this first version won’t be rolling into the Android Beta Program for the general public to try out. Google would probably wait to collate the bigger issues and fix them before the general public gets its first taste of the new ‘O’.
The images have been dropped on the official site, and only a select few devices from the rarefied atmosphere of the tech skyline are chosen: the Nexus 5X, 6P and Player, and the Pixel, Pixel XL and Pixel C. As for the rest, they can try it via emulators.
For those of us from India who are still itching to get their hands on this novelty – there are two main sections in Android: the open world of AOSP, where you can tweak to your heart’s content, and the closed-source “Google” part. In this article, we will talk a bit about both of them.
So, what’s new on the anvil?
Well, by now, any tech-tower gazer worth his salt would know that these are the times of tech saturation. The old has barely left any space for the new. Therefore, expectations of anything earth-shattering from Android O is expecting a bit much. I am not damning the Developer Preview as a run-of-the-mill kind of a thing, for it does have its own new features. Still, they add more flourish to the existing landscape, rather than painting something entirely new.
Batteries power everything, but they are no sun – they deplete quickly with multiple processes running on the device. This has been one of the major areas of contention for almost every version of Android.
Android O tries to solve this by putting certain limits on what an app can do while it is in the background. The good part? These restrictions are automatic. The focus is to reduce battery drain by limiting activity in three areas: implicit broadcasts, background services, and location updates. And given the impact of the restrictions, developers would probably give this feature a double look to see how they can improvise, given the new straight-and-narrow decreed by ‘O’.
How often are you annoyed by multiple notifications about a single event from the many news apps on your phone? Or the constant clamour for updates?
Well, now you can control those notifications by grouping them under selective banners so that you can be more focused. Notifications will still be managed by their respective apps, but you can restrict the cluttering.
Do you prefer your notifications in a circle, as Pixel phones present them? Or in a Samsung-style squircle?
With the new “Adaptive icon” feature, developers supply an icon’s background and foreground layers, and the device’s Android skin can mask them to fit its system design.
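As a minimal sketch of how this looks in practice (the file path and resource names here are illustrative, not from any particular app), an adaptive icon is declared as a small XML resource with separate background and foreground layers, which the launcher then masks into a circle, squircle, or whatever shape the skin dictates:

```xml
<!-- res/mipmap-anydpi-v26/ic_launcher.xml (names are illustrative) -->
<adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- Full-bleed background layer; the launcher crops it to its mask -->
    <background android:drawable="@color/icon_background" />
    <!-- Foreground artwork, kept within the safe zone of the mask -->
    <foreground android:drawable="@drawable/icon_foreground" />
</adaptive-icon>
```

Because the two layers are separate, skins can also animate them independently, for subtle parallax or pulse effects.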
This is for those repetitive annoying pieces of information that you need to type out on your phone time and time again – passwords, email IDs, even your office’s address.
With Android O, you can now choose a source for the ‘autofill’ data, and there will be no need for applications acting as accessibility services when you need to store or retrieve such data.
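As a rough sketch of how an app offers itself as one of those autofill data sources, the provider declares an autofill service in its manifest (the class and label names below are hypothetical), and the user then selects it in the device’s autofill settings:

```xml
<!-- AndroidManifest.xml excerpt; MyAutofillService is a hypothetical class -->
<service
    android:name=".MyAutofillService"
    android:label="My Password Vault"
    android:permission="android.permission.BIND_AUTOFILL_SERVICE">
    <intent-filter>
        <!-- Marks this service as an Autofill Framework provider -->
        <action android:name="android.service.autofill.AutofillService" />
    </intent-filter>
</service>
```

The permission ensures only the system can bind to the service, which is what removes the need for the old accessibility-service workaround.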
Picture-in-picture on phones and tablets
Multi-window mode and split screen are passé now. Currently, Picture-in-Picture (PiP) display is available only if you have an Android TV; on other devices you’d be using a standard multi-window view rather than an overlay.
Not anymore! With the new Android O, the supplementary view is strictly for content, while the controls and other peripherals can be shown on your handset. This will surely enrich the viewing experience!
Hear hear, the new changes to the system are:
High-quality Bluetooth audio provided by Sony’s LDAC codec – the drums on your Bluetooth speaker will sound exact and rich now.
NAN (Neighborhood Aware Networking) connectivity – previously known as Wi-Fi Aware, this will allow devices to discover and communicate with each other over Wi-Fi without an internet access point.
This comes coupled with the Telecom framework feature, which will probably elbow out the universal system Phone app, giving more room to third-party calling apps and rendering better Bluetooth controls over data display.
Android O tweaks the arrow and tab keys’ navigation, allowing for easier integration with the system apps and lifting the overall user experience.
The idea is to solve the problem of having fewer features for a hardware keyboard, such as autocorrect. Google will surely evolve this feature as per Developers’ feedback.
This feature was earlier a part of the Chrome APK, but now it will be embedded in the system as default. “Crash handling” for the Developers will be much easier now, and the applications that use web development languages will now be stable and secure – provided the Developer enables Google Safe Browsing for remote URLs.
Java language API support
In order to help Developers design apps with better performance and ability, Android has enabled support for the new Java API, coupled with optimisation for the new runtimes.
Font resources in XML
One of the first things that users tend to tweak and personalise on our phones, are the fonts. Android has decided to encourage this by making fonts a “Fully Supported Resource Type” in Android O.
This will allow developers to develop new fonts for their respective apps.
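As a minimal sketch of that new resource type (font file and family names here are illustrative), a font family becomes a small XML resource under res/font/, which the app can then reference like any other resource, e.g. android:fontFamily="@font/my_family":

```xml
<!-- res/font/my_family.xml; the referenced font files also live in res/font/ -->
<font-family xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- Regular weight -->
    <font
        android:fontStyle="normal"
        android:fontWeight="400"
        android:font="@font/my_family_regular" />
    <!-- Italic variant of the same family -->
    <font
        android:fontStyle="italic"
        android:fontWeight="400"
        android:font="@font/my_family_italic" />
</font-family>
```

The system then picks the right file automatically when a view asks for bold or italic text in that family.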
Audio API for Pro Audio
This feature is built for applications that require a high-performance, low-latency audio path. Apps read and write audio data via streams, while routing and latency are handled by the audio API.
Given that this is a Developer Preview, no one expects a flawless package. Given it is a new babe in the woods, the Developer Version is made to see what falters and to improve on those bugs. Changes will be made, new features will be added and subtracted as per feedback.
While we don’t recommend that any non-developers try it out yet, when you do finally see it at work on the device in your palm, just remember that a lot of hard work went into readying it for you. And changes don’t have to be revolutionary every single time – sometimes they can be pleasantly silent.
Apple's Patent Indicates Touch ID Would Be Built Into Displays
Back in February 2017, Apple was granted a patent for their schematic technology that might enable the tech giant to build a fingerprint sensor and scanner directly into the glass display on its devices and other products.
Clearly, this would revolutionise biometric use as the world knows it. Not only could Apple include this in the display of the upcoming iPhone, but also in any other device in its immense product portfolio – iPads, Apple Watch, MacBooks (with or without the Touch Bar), forthcoming wearables, even Apple TV remotes – well, almost anything it wants to. So passwords could well be a relic of the past.
Acquired in the U.S. and published recently by the U.S. Patent and Trademark office, the patent details an interactive display with IR diodes, which would enable the device (let’s call it the iPhone) to have a fingerprint sensor embedded in the glass of the front display, thus enabling Apple to replace the one that is embedded in their physical home button.
The patent was originally filed for by the micro-LED display company LuxVue in 2014, but Apple acquired the patent when they acquired the company.
The patent’s description reads: “When the fingerprint is placed upon the transparent substrate, the sensing IR diodes within the display panel sense patterned IR light reflected off grooves of the fingerprint surface. This patterned IR light is relayed to the output processor as a bitmap where it is processed to determine the fingerprint surface’s unique pattern. Because the display panel can sense IR light, the display panel is able to perform surface profile determination when the display panel is not emitting visible light”.
The patent would enable Apple to radically redesign their iPhone. For starters, the home button on the front could go, because it really does not have much of a reason to stay anymore; Apple phones anyway have a virtual home button – they’d just leverage that “soft” button and marry it to their existing 3D Touch (pressure-sensitive capacitive touch) technology.
The patent would save space on the phone, granting greater design flexibility and the broad bezel (the forehead and the chin) on the front might not be a necessary design choice anymore.
Most importantly, this could suddenly provide Apple the means to build in an edge-to-edge display, which would once again be a landmark design element that will set iPhones apart in the crowded marketplace.
Apart from that, let us be honest here, it would just make the iPhone a lot cooler!
This is not the first time that technology of this kind has been talked about. In fact, truth be told, having a fingerprint sensor in the display is one of the most obvious and expected innovations in the tech world today; something that everyone knows is coming, but none have been able to achieve. There are numerous technical challenges that have prevented almost all of the lesser players from getting there first.
But not all.
If Apple does make use of the patent in their next phone, they won’t be the first ones to do so.
Back in 2016, Xiaomi released two flagships – the Mi 5s and Mi 5s Plus – that had an under-glass ultrasonic fingerprint reader on the front. The phones have only been available in the Chinese market, so you may not have heard of them.
The ironic part is, even if the Mi 5s or the Mi 5s Plus were available in the international market, the West would mostly have been deprived of them, because Xiaomi does not (and currently cannot) sell phones in Western markets due to patent infringement risks.
Apple’s patent was reportedly first filed for in June 2014 and credits Kapil V. Sakariya and Tore Nauta as its inventors.
Apple has always had this sweet vision to pace their hardware development ahead of the market (most times), with a view of delivering superlative user experiences – often things users didn’t even know they needed.
From the moment Steve Jobs introduced the first iPhone back in 2007, Apple has constantly kept users hooked and competitors at bay through such innovations and breakthroughs.
That said, we must mention that Apple’s patent is only a patent so far. Even though Apple is celebrating the iPhone’s 10th anniversary this year – which means the world is expecting something radical and big from them – we might not necessarily see this tech in their next phone.
I understand if your heart sinks as I say this. But there’s still going to be a lot of good stuff on the 2017 iPhone – amongst other things, we are expecting wireless charging, a lovely, reworked steel-and-glass body and a curved-edge display.
Yet, I keep hoping, they’re able to whip up the secret sauce of the in-screen fingerprint scanner – I really, really want to see an “infinity” display on an iPhone!
Ever given a thought to what our lives would be like without any benchmarks? Hard to imagine, isn’t it? Everything that we do is measured by a set of standards – whether it’s our clothes, phones, laptops, or exam results, they are always up against certain expectations.
And in this world that’s brimming over with technology, in every facet of our day to day life, standards are a must-have.
GFXBench is one such benchmark that measures the performance of devices. A lot of companies run their upcoming devices through its tests to see how they stack up against the competition.
And a new Asus tablet has recently been noticed on their database.
According to the specifications seen on GFXBench, the tablet is a 9.6-inch machine that boasts a 2048×1536 resolution. The MediaTek MT8173 SoC is complemented by a PowerVR GX6250 graphics chip from Imagination Technologies (the same folks who make the GPUs for iPhones and iPads), supplemented with 4 GB of RAM.
Storage is a generous 64 GB, and the cameras are fairly decent too – 7 megapixels at the rear, and 4.7 megapixels at the front.
Based on these specs and the fact that the new tablet operates on Android 7.0 Nougat, this tablet would probably be categorized as an upper mid-range product.
It sure sounds awesome, but don’t form an opinion just yet – because there’s some confusion around the processor the tablet runs on: GFXBench lists the MediaTek MT8173 as a dodeca-core processor, when in fact other online resources indicate that it is a quad-core processor.
The difference between the two is that a quad-core chip has four separate units for executing processes, whereas a dodeca-core chip has twelve. Performance-wise, a dodeca-core chip can, in principle, run more tasks in parallel. But the website seems to have made a mistake, and now we’re left wondering which processor is actually being used.
Whether it turns out to be a dodeca core or a quad core, it has definitely given an aura of mystery to this new device. Too bad we’ll just have to wait and see.
I may be raining on your parade here, but all this excitement might be a little premature. Device specs are often changed before launch, and even then there is no guarantee that it will actually make the cut.
Regardless, you just rest easy – whether it launches or not, Chip-Monks will definitely keep you updated.
What if you could use your iPhone or iPad as a laptop? Sound like a good idea to you? Well, Apple seems to agree.
While others have done this already, it’s come to light that Apple has apparently filed a patent for a hardware accessory that can transform your humble smart device into a futuristic, full-function computer.
Recently, the U.S. Patent Application Publication published an approved design proposed by Apple that was actually filed last year in September.
The patent proposes an “electronic accessory device” that is essentially a thin portable dock, much like a laptop in form factor. The accessory carries all the components required to transform your iPhone or iPad into a full-blown computing device.
The intent behind this creation seems to be to lend a modular approach to computing, whereby your iPhone or iPad would breathe life into an otherwise lifeless setup.
“The present application describes various embodiments of systems and methods for providing internal components for portable computing devices having a thin profile. More particularly, the present application describes an electronic accessory device available to extend and expand usefulness of a portable computing device,” the patent’s description reads.
Apple’s patent also specifically mentions aluminum as an “ideal enclosure material”, hinting that the accessory could well be something that keeps with the MacBook’s very lithe appearance and form.
First off, let’s just call it a ‘dock’, for ease of reference.
Well, the dock is not a dumb piece of aluminium. There’s actually a full-form touchscreen display, a full-size keyboard, and internal components like the all-important graphics processor, onboard solid-state storage and a large battery, all built into the otherwise passive dock.
All these come together when paired with an iPhone or iPad, which functions as the missing piece of the jigsaw puzzle – the central processing unit, a.k.a. the processor!
If you’re wondering what this setup reminds you of – let me help you.
Well, it resembles HP’s LapDock, Asus’ PadFone and, most recently, Microsoft’s Surface Book.
So, it seems Apple has taken Asus’ idea of using a phone to become a laptop, and seated the graphics processor in the “base”, or dock (as I’m referring to it in this article) – much in the same manner that the Surface Book mates its removable screen with a base that contains the GPU, the keyboard and a big battery.
“It is anticipated that the accessory device is not a stand-alone computing device but only acts in concert with a host device”, the patent says. “The host device can be a portable computing device, such as a smart phone, media player, tablet computer, or other portable computing device”.
The accessory would establish communication with the iOS-powered host device via physical connectivity – either a Lightning port or (more believably) the Smart Connector.
Wireless options like Wi-Fi, Bluetooth, or LTE don’t emerge as front runners for the regular back-and-forth of information, given the unnecessary strain of transmitting graphics and the like over the air. That said, all three may exist, and be used for their respective original purposes.
To simplify the technicalities and jargon of the patent for you, it seems that there are two kinds of configurations.
In the first configuration, it’s the iPhone that assumes the place of the trackpad, slotting in where the trackpad is usually found in laptops. Once the iPhone sits in the trackpad slot, it performs two functions – powering the “laptop” and serving as the mouse trackpad – alongside the full-size physical keyboard that is part of the dock. Of course, you’ll need a large monitor to make your PowerPoint presentations on.
The second configuration, as gleaned from the patent, relates to coupling an iPad to a base-only dock.
The iPad plays the double role of powering the dock and also assuming the role of the display screen. The iPad in this setup works as a pointing tool too, leveraging the dock’s physical keyboard, for the true “computing” feel.
What I’m most excited about is that the accessory comes with an added bonus: it would finally supply the long-missing pointing tool on an iPad or iPhone!
While it is only natural to be excited about this new development, it would be unrealistic and naïve to assume that this patent will definitely turn into a commercially available mainstream product from the Cupertino tech giant.
Why? Well, because this is how the world of Research and Development works. There is a possibility that this never makes it to market and remains useful only as an experiment. To me, that would be a waste – considering the success the Surface Book is already enjoying, and with many consumers yearning for a hybrid machine to lessen their baggage weight, such a machine would definitely be a welcome option – especially if it enables the user without compromising on her computing needs.
If you are in the business of finding solutions, you’d know that sometimes the most improbable is the most plausible!
Take the example of batteries – researchers have long been trying to find a more efficient solution to the Lithium-Ion battery. The lithium metal in the battery is quite powerful, but not entirely safe. A little roughing up will send the battery up in flames.
Case in point: the recent incident of a pair of headphones that exploded mid-air. And of course, there’s the entire epic drama of the Samsung Galaxy Note7. Which is why the IATA has actually started disallowing logistics carriers from transporting equipment that includes Lithium-Ion batteries.
The situation becomes worse when one considers that there’s no real viable alternative at this time; rechargeable solutions haven’t yet reached the point of replacing fuel cells.
In terms of automotive power delivery too, rechargeable batteries are far from gaining public acceptance. Petrol and diesel continue to be the dominant energy currencies globally.
Big players like Tesla Energy are trying to offer renewable solutions, but they are still quite a distance from being ubiquitous. Thankfully, they’re powering ahead, trying to find better ways and materials.
In March, Tesla flipped the switch on a huge solar energy farm on the Hawaiian island of Kauai. Here, Tesla has installed industrial-grade battery packs for energy storage, so as to create and run a parallel power-grid system. The project is probably the biggest battery-backed power plant on the planet, studded with 54,978 panels and 272 Tesla Powerpacks – providing 52 MWh of energy storage and 13 Megawatts of solar generation for the grid.
In southern California, almost 300 Tesla batteries are installed to suck up energy from the grid and feed it back when needed.
Such energy alternatives might seem novel and pleasantly disruptive at first glance, but reality is a bit far from it.
Existing battery technologies have almost reached their saturation point (more about this in a moment), and any more tweaks to the battery tech will only make it more expensive and bulky. Constant charging and recharging is also a problem.
Today’s Lithium-Ion batteries use liquid electrolytes to transport lithium ions from the negative side of the battery (the anode) to the positive side (the cathode). If batteries like these are charged in quick succession, thin metal whiskers called dendrites can form across the liquid, causing a short circuit and an eventual fire or explosion.
As compared to other forms of transportation fuel, electricity is the most efficient – roughly 10 kWh of electricity takes a car about as far as 3.785 litres (one US gallon) of petrol – and cost-wise it is very cheap.
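A quick back-of-the-envelope sketch of that comparison. The per-unit prices below are illustrative assumptions, not figures from the article; plug in your local rates to see how the per-mile cost compares where you live:

```python
# Back-of-the-envelope comparison of electricity vs petrol as transport fuel.
# Efficiency figures follow the article; the prices are illustrative assumptions.

KWH_PER_40_MILES = 10.0      # ~10 kWh of electricity moves a typical EV ~40 miles
LITRES_PER_40_MILES = 3.785  # ~1 US gallon of petrol covers the same distance

PRICE_PER_KWH = 0.12         # assumed electricity price, USD
PRICE_PER_LITRE = 0.80       # assumed petrol price, USD

electric_cost_per_mile = KWH_PER_40_MILES * PRICE_PER_KWH / 40
petrol_cost_per_mile = LITRES_PER_40_MILES * PRICE_PER_LITRE / 40

print(f"Electric: ${electric_cost_per_mile:.3f} per mile")
print(f"Petrol:   ${petrol_cost_per_mile:.3f} per mile")
```

Under these assumed prices, electricity works out to roughly a third of the per-mile cost of petrol.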
Provided it comes from a non-fossil-fuel source, electricity is also the cleanest transportation fuel available. It has its own share of problems though.
First, the constant need to charge the batteries regularly.
Second, unlike utility-scale batteries, where size doesn’t matter, batteries that are supposed to be used in transportation have the added baggage of portability.
Back, then, to the problems associated with Lithium-Ion itself.
Enter John Goodenough, the co-inventor of the Lithium-Ion battery, who also helped develop the RAM (Random Access Memory) used in electronic devices.
Goodenough, at the age of 94, came back to save the day, exhibiting the design of a rather viable alternative to the Li-Ion setup.
Unlike the Li-Ion battery, his new design uses a solid glass electrolyte instead of a liquid one, sodium instead of lithium, and possesses three times the energy density of Lithium-Ion batteries.
Since it has a solid-state electrolyte, this new battery can operate at temperatures where a normal Li-Ion battery can’t.
There are other benefits too. Lithium is an expensive metal, mined mainly in South America and China and extracted from brines in the United States. Using sodium instead of lithium makes the battery earth-friendly and cheaper – sodium is plentiful in seawater.
But this design has also baffled his peers in the scientific community. The physics of the matter states that, in order to produce electricity, the two opposing electrodes must be made of different materials that undergo different electrochemical reactions. The difference between their electrode potentials is what produces voltage, storing energy in the process.
But since Goodenough’s battery has solid metallic electrodes, the voltage should theoretically be zero.
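In conventional electrochemistry terms (standard notation, not taken from the paper itself), a cell’s open-circuit voltage is simply the difference between the two electrode potentials, which is why identical electrodes would be expected to yield nothing:

```latex
E_{\text{cell}} = E_{\text{cathode}} - E_{\text{anode}},
\qquad
E_{\text{cathode}} = E_{\text{anode}} \;\Rightarrow\; E_{\text{cell}} = 0
```

This is the textbook objection the skeptics are raising: with the same material on both sides, the potential difference that normally drives the current appears to vanish.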
Additionally, the paper in which the discoveries were published does not adequately explain the cause of the claimed three-fold increase in energy density. According to Venkat Viswanathan, a professor at Carnegie Mellon University in Pittsburgh, Pennsylvania, “He’s technically made a perpetual motion machine”.
Despite the uncertainties, one can be pretty sure that the path of progress will be filled with surprises and controversies. And in the words of Eileen Carey, “One must not be afraid to be controversial… they put themselves in a better position to be on the right side of history”.
The discovery by Goodenough is a clear reflection of this maxim, and if humanity is to be well served, then we need to consider this possible alternative seriously.
PS: Chip-Monks has written many, many articles about the several different research, discoveries and experiments that are currently happening in the world of energy-generation and storage. Do take the time to read our posts, you’ll be as intrigued as we are. Promise.
Samsung's Own Virtual Assistant, Bixby, To Power The S8. Should Siri Be Worried?
Apple had kicked off the Virtual Assistant craze through the launch of Siri (built into iOS 5) back in 2011.
Given its head start, Siri ruled the space for quite a long time, standing alone as the Alpha.
But over time, Siri gained a lot of company – Microsoft’s Cortana, Amazon’s Alexa, and Google’s Assistant – all joined in the fray of Artificial Intelligence based assistants.
Soon, this clique of assistants will be joined by Samsung’s own Bixby.
Those of us who follow Tech, have known Bixby was coming for a while now, through rumours and industry-trend-watchers.
Finally, Samsung confirmed recently that an early version of Bixby will launch alongside its next flagships – the Samsung Galaxy S8 duo.
Samsung is expected to officially unveil the phones at an event on March 29th, and they should start shipping in April. The phones will have the Bixby software baked in, along with a rumoured dedicated button on the side of the device that would let you activate the service when you don’t want to use a voice command.
Of course, the Galaxy S8’s will also be running the Google Android software, which means that you can use Google’s voice services too. But Samsung is hoping Bixby stands out thanks to its integration with Samsung products and a few additional features including the ability to give you full control over apps that support Bixby.
Once an app is Bixby-enabled, Samsung says, anything you could do with touch commands can be accomplished by voice too! That’s a big promise.
That hyperbole aside, Samsung itself says that when the Galaxy S8 launches, only a “subset of preinstalled applications will be Bixby-enabled”, but the company expects the list to grow significantly in a short span of time as Samsung intends to release a Software Development Kit to help third-party developers add Bixby support to their software.
It is the kind of giant move that would probably be doomed to failure if it came from a company other than Samsung – why would Android app developers add support for a virtual assistant from HTC, Sony, or LG, when they could just tap into Google Assistant to enable voice support across a wider range of current and upcoming Android devices?
But Samsung is currently the world’s top smartphone maker, so the company might actually have the clout to pull this off.
On the other hand, Samsung’s past attempts to run its own app store, video store, and other alternative-to-Google features haven’t always been successes.
I personally think Bixby could go either way; a virtual assistant – and even more so an Artificial-Intelligence-based tool – has to do a lot to stay relevant and in the customer’s mind. It’s a well-known fact that app fatigue is a reality now (customers are using fewer and fewer apps), and with the passage of time, people are barely using the newer features included in each OS upgrade.
Truth be told, virtual assistants haven’t yet found much uptake among smartphone users across the globe, so if Bixby doesn’t light up the world, it may not be entirely its fault. Assistants still have to find their place in the world – just as Siri itself is struggling to.
Google started to roll out the second beta of Android 7.1.2 for Pixel and Nexus users a couple of days ago.
The recent release fixes some bugs from the initial version released in January, and also offers the latest security patch from March.
The beta, expected to soon be made available to the public for Pixel and Nexus smartphone users, brings a number of bug fixes and performance optimisations, bringing the devices up to build NPG47I.
According to reports, the second version of the update is bringing in the highly requested “swipe-down-for-notifications” shortcut for Nexus 6P, which has been available to Nexus 5X users since the first beta.
The latest release also brings massive changes to Pixel C, finally bringing it up closer to the Pixel lineup. From the new white navigation buttons to the updated settings menu, Google has made a number of changes to Pixel C’s user interface.
The tablet has also received a new multi-tasking menu.
Google is yet to officially post the details or factory images for the latest update; we will include them here as soon as they are made available.
Android 7.1.2 is expected to be made available to the public in the first week of April. Google had announced last year, with the launch of Android Nougat, that it would follow a regular maintenance-release schedule; however, it did take Google longer than expected to release this second beta.
We hope it was worth the additional wait.
You Could Soon Edit Or Delete WhatsApp Messages You've Already Sent.
WhatsApp, the world’s most popular messaging app, may finally be rolling out two new features that most users have been eagerly waiting for.
A new version of WhatsApp (release date currently unknown) would allow users to edit and delete messages they’ve already sent. Given the fast-paced conversations that go on over WhatsApp – irrespective of where you may be or how frazzled you may be at the time – typos and, often, missteps are a common sight in chat transcripts.
For long, folks have wanted to be able to pull back the proverbial spoken word on WhatsApp, but have not been able to, since the app never allowed such latitude.
Now, it would!
The news broke when WABetaInfo, a trusted source of early WhatsApp news and speculation, put up a video showing the message-deletion process in action. It claims to have discovered the feature in a recent WhatsApp beta build for iOS.
Per the video, this new feature works much the same way as “Forward” or “Copy” do on WhatsApp – via the contextual menu that pops up when you tap and hold a sent message. One of the new options that comes up is “Revoke”, which can be selected to delete a sent message from the conversation.
A subsequent tweet by WABetaInfo showcased the Edit option in the same context: beside the “Revoke” option would be an option to edit a text that has already been sent.
Currently, editing sent messages is not something WhatsApp allows its users to do. As for deleting a sent message – sure, one can go ahead and delete it, but it is only deleted at the device level; the recipient on the other end will still see it, since the deletion does not (so far) work at the server level.
As per the video, this seems to apply even to messages that have already been read, since the trademark blue ticks can be seen on certain texts before they are deleted.
Speculation on that point is high, since an obvious assumption, to begin with, was that it would only apply to messages that have not been read. With various trusted sources differing in their opinions on this, no concrete conclusion can be drawn yet.
Another feature that is rumoured to be under testing is called Live Location Tracking. This feature lets users show their movements to other users within a group chat.
Users can choose to share their moving position for a limited time of one, two or five minutes.
This feature seems to have been built on the quite popular location-sharing feature WhatsApp already has, and seems aimed at enabling users to find one another more easily in a crowded environment.
WhatsApp had been quiet for quite a few years, with minimal feature additions (especially after the Facebook acquisition). However, over the last year and a half or so, WhatsApp has become quite active, adding new features and optimisations regularly. The iOS version recently received an update that carried many welcome changes for users – including an increase in the number of photos and videos that can be shared in one go (bringing it up to 30), a useful “Storage Usage” screen, and the ability to queue messages, the last of which has been available to Android users for a while now.
These three new features are not yet available to the public at large, as they’re rumoured to be in testing on both the iOS and Android versions of the app; regular users cannot access them right now.
Facebook-run WhatsApp has, however, not confirmed the news, or anything else in this regard, yet. WABetaInfo’s speculation pertains to the Apple-exclusive iOS build, but if the features do materialise, it can be assumed that they would be brought to the Android version of the messaging app as well.
This time last year, Facebook poached Regina Dugan, an Advanced Technology stalwart from Google. Eyebrows were raised at this left-field hire.
Then came other news – Facebook was putting together an entire team and setting up what was called Facebook Building 8 – that had everyone befuddled. Confusion about what such high-end experts from the hardware research and development sector were going to do at Facebook ran into several hundred barrels of print ink.
Shortly, things fell into place and it started becoming apparent that Facebook was going the Google and Microsoft way – using the gains from its primary business, digital media, to feed its hardware research and development enterprise, to foster and create supporting platforms that may at some point become lynchpins of their own.
Building 8 was thus the site of what could certainly be expected to be noteworthy hardware advancements – a mecca of innovation and Facebook’s hotbed of hardware and next-gen ideas.
The initial questions, born of curiosity, were of course many: what exactly was Facebook going to make at Building 8? When would we see any actual results?
Well, the ear-to-the-ground pipeline now has some details for us.
From what it looks like, Building 8 is quite similar to Google’s Advanced Technology and Projects Group, or ATAP. It is also not quite different from Google X, the lab where Google’s self-driving cars were born.
Even though Building 8 is hardly a year old, it seems they might already be ready to show the world some teasers of what they have been up to so far. Word is that Building 8 is working on four advanced technology projects, each of which will play an important part in F8 – Facebook’s annual global developer conference, coming up in April.
These projects reportedly span everything from cameras and augmented reality to science-fiction-like brain-scanning technology.
Recent developments have suggested that one of these four projects involves cameras and augmented reality. Given that Facebook has been quite publicly and actively working on VR, this would not be a far-fetched move at all.
Another project is expected to revolve around drones – something rival Snapchat was also noticed experimenting with not too long ago.
This supposition arises from Facebook’s hiring of Frank Dellaert, a robotics and computer-vision expert who was the chief scientist at Skydio, a small startup working on a yet-unreleased drone that can autonomously track a person while navigating through physical space.
Another project might involve brain-scanning technology, or so goes the word. The hiring of a former Johns Hopkins neuroscientist who helped develop a mind-controlled prosthetic arm suggests that something of the kind is being experimented with at Facebook.
One of their projects might have medical applications – or so suggests Facebook’s hiring of an interventional cardiologist from Stanford, with expertise in early-stage medical device development.
The word also is that Building 8 might be developing a fifth, unspecified project, and is currently looking for the right person to lead it.
Amongst other noteworthy people who have recently joined Building 8 are Skydio’s former head of hardware, Stephen McClure, and Alex Granieri, who previously worked on Aquila, Facebook’s high-altitude drone designed to beam internet connectivity to the developing world.
What we find really intriguing is that all the project leaders within Building 8 get to work like mini-CEOs: each is assigned a timeline and an idea to develop. Work apparently happens in such a manner that these inventions can either be shipped and sold as standalone products, or be spun out into a different part of Facebook.
Facebook’s interest outside of the digital-media platform has been evident for a while now.
We are all by now familiar with the Internet.org efforts the Silicon Valley giant has been making to take the internet far and wide. Its efforts in this respect are comparable to Google’s, with the Google Loon project. Facebook has also been working with VR lately, having brought Oculus on board.
So, it’s now easy to understand that Building 8 is more like an addition to already existing efforts on Facebook’s part to expand into a varied amalgam of tech-related innovations.
The move to hardware is of course a fairly risky one for Facebook to make – a company that otherwise reigns as an internet giant, with its close-to-2-billion user base, and numerous products. What it also needs to be careful of is that it is taking on deep-pocketed competitors like Apple, Google, and upstarts such as Snap, in a cut-throat business defined by thin profit margins and complex logistics.
It would be interesting to see how it goes for them, what Facebook brings to the table, and whether any of these skunkworks actually manage to make a mark in their respective arenas.
You’d agree: audio from headsets, even the most expensive ones you’ve ever used, sounds flat – as flat as a piece of paper compared to what it sounds like in real life.
Let’s take an example. If you’re in an airplane, eyes closed and at peace, you’d hear the drone of the engine from one ear, the crackling of a wrapper from the other, the turn of a page, the step of someone in the aisle, and even the air coming from the overhead vent (despite its very subtle hiss). You may possibly even hear someone smoothening his shirt over his belly as he leans back for a nap!
Both your ears would give you a very clear, discernible, three-dimensional plane of sound that you (thanks to your brain’s sound-interpretation algorithm) would be able to clearly understand and make peace with, as you nod off to sleep.
But if I were to place a voice recorder in your lap that recorded all this ambient noise as you slept, and you later played the recording back using your favoured earphones, you’d actually hear only a jumble of noise that, while discernible, would at best sound flat and two-dimensional – nowhere near as real as if you’d been awake to experience it first hand.
Here’s the kicker – it’s not the voice recorder’s, or your earphones’, or even your brain’s fault. It’s not the hardware so much as the technique with which the audio is captured that determines whether you can fully enjoy it in all its three-dimensional glory.
Despite all the development in materials and hardware, how to record binaural audio (i.e. audio that involves both your ears) so that it can be recreated in realistic three-dimensional form has long been a dilemma for the industry.
Experiments and techniques abound, including microphones embedded in somewhat crazy-looking fake ears. This has become a common way of recording binaural audio, but it’s neither the only way nor the best approach.
It’s akin to how most TVs today convert regular visuals into three-dimensional ones – artificially, digitally. And those are obviously not the same as movies recorded in 3D! So there has to be a way to record audio in 3D too.
Well, a new Kickstarter product, OpenEars, from a company called Binauric, could make recording binaural audio easier than ever. OpenEars takes the novel approach of building microphones into in-ear headphones. And there’s another twist.
Many binaural microphones try to simulate the shape and density of the human head in order to reproduce the way sound actually reaches our ears. OpenEars sidesteps this by using your own head (we do mean physically, not metaphorically), letting you simply place the microphones in the right spots.
So, if you like recording videos using your smartphone, this product could well be for you, as it’ll allow you (and your friends) to enjoy real-life video and audio recorded on a whim!
How does OpenEars enable that? Well, these Bluetooth headphones include built-in microphones, and a mode called HearThrough allows mixing live sound from the environment into the music you’re listening to, if you want. This makes it safer to ride a bike – or perform any other activity – while enjoying audio through your earphones, as you remain fully aware of your surroundings, mitigating any untoward surprises.
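The article doesn’t describe Binauric’s actual implementation, but the idea behind a HearThrough-style mode can be sketched as a simple weighted mix of the microphone feed and the music stream. The function name and gain values below are illustrative assumptions, not OpenEars internals:

```python
# Sketch of a HearThrough-style mix: blend ambient microphone audio into the
# music the listener is already playing. Gains are illustrative assumptions.

def hear_through_mix(music, ambient, music_gain=0.8, ambient_gain=0.5):
    """Blend two equal-length streams of samples (floats in [-1.0, 1.0])."""
    mixed = [music_gain * m + ambient_gain * a for m, a in zip(music, ambient)]
    # Clamp to the valid sample range to avoid clipping artefacts.
    return [max(-1.0, min(1.0, s)) for s in mixed]

music = [0.2, -0.4, 0.9, 0.0]     # a few music samples
ambient = [0.1, 0.1, 0.5, -0.2]   # the corresponding mic samples
print(hear_through_mix(music, ambient))
```

A real device would do this continuously on short buffers, and likely in hardware, but the principle is the same: the ambient gain controls how much of the outside world leaks through.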
To me, this product feels inevitable.
Binaural mics that go into your ears have existed for a while and range from USD 60 to 500; however, they can’t be used with most (maybe even all) smartphones, as the average microphone jack supports only a mono signal, while stereo is a prerequisite for binaural recording.
I own a couple of pairs of these, but I never carry them around, because that would mean also carrying around something to record with, like a bulky Zoom H4n. Not so with OpenEars.
And you have the advantage of them being headphones too, so when you want to record something, they’re already in your ears. At a suggested retail price of around USD 225, this is just a little more than you would pay for a nice pair of in-ear binaural mics.
Today, binaural audio is mostly used in music, sound design, and niche YouTube communities. Making it easier to record 3D sound directly to your phone could open up the idea to a more mainstream audience. Imagine if every Snapchat you received was recorded in binaural! The immersive quality of 3D audio would literally add another dimension to video on social networks.
Just wait, the binaural wave is coming.
This isn’t Binauric’s first foray into speaker-mic hybrids. Its first product was a Bluetooth speaker and binaural microphone called Boom Boom. Although I haven’t tried OpenEars yet, I have friends who have been playing with Boom Boom and will vouch for both its sound quality and design.
Binauric says OpenEars will be compatible with GoPro cameras, potentially adding an aural dimension to POV extreme sport videos.
Binauric has even created special mics called OpenMics, which can be mounted on a helmet.
Binauric planned to ship to the first 500 backers by November with mass production scheduled for March, but it’s a Kickstarter product, so that may change at the drop of a hat.
One additional downside: because it uses a unique Bluetooth protocol for processing high-quality stereo audio, it has to use a special app to record. The app is fine, but I want to use these mics for everything: Snapchat, Vine, Hyperlapse, Instagram, FaceTime, Skype. So even if Binauric’s headphones pan out, my dream of binaural Snapchats is in the hands of the phone and app makers who would have to work with this protocol – and maybe one day binaural can reach the masses.
Left behind by the likes of Apple Pay, Android Pay and Samsung Pay – which have so far been the world leaders in mobile and contactless payments – Visa is finally feeling the pinch.
Thus, Visa recently announced its own tool for contactless payments – a pair of NFC-enabled sunglasses! Now that sounds uber-cool!
They would appear to be quite like a regular pair of sunglasses, but with the addition of a smal