1. OS Updates
When
you first bought your device, it came with a specific version of its
operating system. Maybe Android 4.4 KitKat, maybe iOS 7, both of which were released in 2013. When those OS versions came out, they were developed
with a certain set of hardware specs in mind.
Fast-forward to
today and overall hardware specs have drastically improved. Features are
added to both Android and iOS, and these improvements are made with the
newer hardware specs in mind. As such, newer versions of an OS require
more computing power and resources for a smooth experience.
In
other words: if you have a 2013-era device that came with Android 4.4
KitKat and upgraded it to Android 7.0 Nougat, you simply don’t have
enough juice to handle all of the extra overhead. Hence, the device
feels slower. What can you do about it? Not much,
unfortunately. Feel free to apply minor upgrades (e.g. from Android 7.0
to 7.1) but avoid major upgrades (e.g. from Android 7.1 to 8.0). Keep
your device in the era it shipped in, and upgrade the device itself if
you want to take advantage of a newer OS version.
2. App Updates
While
all types of software can succumb to something called “feature creep” —
the continued adding of new-yet-arguably-unnecessary features — mobile
apps are some of the worst offenders. Even so-called “lightweight” apps
can quickly grow bloated over time.
But the real tragedy is that
most developers aren’t mindful of the resources used by their apps. In
fact, as overall device hardware improves, developers tend to get lazier
as far as resource management goes. Over time, apps tend to eat up more
RAM and CPU but your hardware stays the same, so performance feels
slower.
Take
an app like Spotify and compare how it is now to what it was like back
in 2014. The 2014 version would run perfectly fine on today’s phones,
but today’s version of Spotify would likely sputter on a 2014-era phone.
Apply this to all apps on your device and it’s easy to see why it may
seem slower now. What can you do about it? As
apps grow bloated, you can replace them with lighter-weight
alternatives. Likely offenders include note-taking apps, media apps,
social network apps, and office apps. In some cases, an older version of
an app might be available. So long as it doesn’t have any glaring
security issues, it might suit your device better than the latest
version.
3. Background Apps
Another reason why your phone
feels slower is that you have more apps installed now than when you
first got the device. If you don’t believe me, go to your phone’s
settings and look at all of your downloaded apps. Most people think
they’ve only installed 10 or so apps, but are often surprised to see
closer to 40 or 50.
The
problem is that some apps run in the background even when you aren't
actively using them. For example, email apps are always checking for new
incoming emails, messaging apps are always awaiting new messages,
note-taking apps are always syncing, etc. Even animated live wallpapers and home screen widgets need resources to do what they do.
What can you do about it? Identify which apps are draining your battery,
as heavy battery use tends to indicate heavy background processing.
Switch to a static wallpaper and avoid relying on widgets. Uninstall
apps you don’t use. Disable background processing in apps that allow it.
4. Flash Memory Degradation
All smartphones and
tablets run on flash memory, which is a type of solid-state storage
medium with no moving parts. The most common type of flash memory is
called NAND. While NAND is fast and affordable, it does have a few
quirks that can impact performance.
First,
NAND memory grows slower as it fills up. The exact mechanisms behind
this are beyond the scope of this article, but suffice it to say that
NAND memory needs a certain amount of “empty blocks” to operate at peak
data-writing performance. The speed loss with full storage can be
significant.
Second, NAND memory degrades with use. There are three kinds of NAND memory
— SLC, MLC, TLC — but they all have write cycle limits per memory cell.
When the limit is reached, the cells wear out and impact performance.
And since your device is always writing data, deterioration is
unavoidable.
Note that TLC is a type of NAND memory pioneered by
Samsung. It’s the cheapest to produce but has the worst durability:
4,000 write cycles per cell versus 10,000 in the more standard MLC type.
This might be why Samsung devices have a reputation for slowing down
more than non-Samsung devices.
What can you do about it?
We recommend staying under 75 percent of your device’s total storage
capacity. If your internal storage is 8 GB, don’t cross the 6 GB
threshold. Keeping free space also gives the storage controller room for “wear leveling,” a technique that spreads writes evenly across cells and delays performance degradation.
5. Perception
In spite of all the above, your device might simply feel slower because you perceive it to be slower, not because it has actually slowed down.
There’s
an interesting phenomenon where search traffic for “phone slow” spikes
after new phone releases and big OS updates. Nobody knows for sure what
this means, but one interpretation is that when something new comes out,
what you have right now suddenly seems worse.
Furthermore, as the people around you upgrade their devices, and as you acquire other
devices in your household (e.g. a brand new laptop), your baseline for
good performance goes up. Your Galaxy S3 Mini may have been “amazing” at
one point, but now that your standards and expectations have risen,
it seems like “a piece of junk.”
What can you do about it? Learn to accept it or upgrade your device. Android users could flash a new, lightweight ROM.
You may not have heard of HTTP/2 yet, but it’s the most recent
update to HTTP. The new protocol standard introduces some new concepts
and makes communication between servers and applications faster and more
efficient.
What Is HTTP/2?
HyperText Transfer Protocol Version 2, or HTTP/2, is the first major update to HTTP in nearly two decades.
The
previous protocol standard, HTTP/1.1, has been in use since 1997 and
relies on a mix of clunky workarounds to get around its own limitations.
HTTP/2 is based on SPDY (“speedy”), an open-source experiment started by Google to address some of the issues and limitations of HTTP/1.1.
The Internet Engineering Task Force (IETF) specifies the changes like this in Hypertext Transfer Protocol version 2, Draft 17:
“HTTP/2
enables a more efficient use of network resources and a reduced
perception of latency by introducing header field compression and
allowing multiple concurrent exchanges on the same connection […]
“It
also allows prioritization of requests, letting more important requests
complete more quickly, further improving performance.”
“HTTP/2 also enables more efficient processing of messages through use of binary message framing.”
“This
specification is an alternative to, but does not obsolete, the HTTP/1.1
message syntax. HTTP’s existing semantics remain unchanged.”
HTTP/2 Is Based on SPDY
By
2012, most modern browsers and many popular sites (Google, Twitter, Facebook, etc.) already supported SPDY. As SPDY's popularity increased, the HTTP Working Group (HTTP-WG) started working on updating the HTTP standard.
From this point onward, SPDY became the foundation and experimental branch for new features in HTTP/2. At the time, we examined how SPDY could improve browsing. Since then, the version 2 standard has been drafted, approved, and published.
Many of the features from SPDY were incorporated into HTTP/2, and Google eventually stopped supporting SPDY in early 2016.
Most browsers eventually stopped supporting SPDY, and as there are no alternatives, HTTP/2 is becoming the de facto standard.
While
the HTTP/2 protocol standard is not strictly backward compatible with
HTTP/1, compatibility can be achieved via translation. An HTTP/1.1 only
client won’t understand an HTTP/2 only server and vice versa, which is
why the new protocol version is HTTP/2 and not HTTP/1.2.
That said, an important part of the work done by the HTTP-WG is to make sure HTTP/1 and HTTP/2 can be translated back and forth without any loss of
information.
Any new mechanisms or features introduced will also be version-independent, and backward-compatible with the existing web.
HTTP/2
isn’t really something a user can implement, but there are things we
can do to affect our browsing speed. Do you believe any of the common myths about speeding up your internet connection?
The Main Features of HTTP/2
HTTP/2
comes with some great updates to the HTTP standard. Some of the more
important ones are binary framing, multiplexing, stream prioritization,
flow control, and server push.
Binary Framing
HTTP Messages by mfuji09 is licensed under CC-BY-SA 2.5.
Following the update to HTTP/2, the HTTP protocol
communication is split up into an exchange of binary-encoded frames.
These frames are mapped to messages that belong to a particular stream.
The streams are then multiplexed (woven together in a sense) in a single
TCP connection.
The new binary framing layer introduces some new terminology: streams, messages, and frames.
Streams are bidirectional flows of bytes that carry one or more messages.
Each of these streams has a unique identifier and can carry bidirectional messages using optional priority information.
Frames are the smallest unit of communication in HTTP/2. Each frame carries a specific kind of data (HTTP headers, message payloads, etc.), and its header at minimum identifies the stream that the frame belongs to.
Messages are complete sets of frames that map to a logical HTTP request or response; each message is made up of one or more frames.
This allows a single TCP connection to carry what in the past required multiple connections.
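To make the framing layer a little more concrete, here is a minimal sketch in Go of the fixed 9-byte header that precedes every HTTP/2 frame, following the layout described in the HTTP/2 specification (RFC 7540). The struct and function names are illustrative, not part of any library:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// FrameHeader mirrors the fixed 9-octet header that precedes every HTTP/2
// frame: a 24-bit payload length, an 8-bit type, an 8-bit flags field, and
// a 31-bit stream identifier (the highest bit is reserved).
type FrameHeader struct {
	Length   uint32 // payload length in octets (24 bits on the wire)
	Type     uint8  // e.g. 0x0 = DATA, 0x1 = HEADERS, 0x8 = WINDOW_UPDATE
	Flags    uint8  // e.g. 0x1 = END_STREAM, 0x4 = END_HEADERS
	StreamID uint32 // stream this frame belongs to; 0 means the connection itself
}

// parseFrameHeader decodes the first 9 bytes of an HTTP/2 frame.
func parseFrameHeader(b [9]byte) FrameHeader {
	return FrameHeader{
		Length:   uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2]),
		Type:     b[3],
		Flags:    b[4],
		StreamID: binary.BigEndian.Uint32(b[5:9]) &^ 0x80000000, // clear reserved bit
	}
}

func main() {
	// A HEADERS frame: 13-byte payload, END_HEADERS flag set, stream 1.
	raw := [9]byte{0x00, 0x00, 0x0D, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01}
	fmt.Printf("%+v\n", parseFrameHeader(raw))
}
```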
Multiplexing
In HTTP/1.1, only one response can be delivered at a time per connection, so the browser has to open additional TCP connections if the client wants to make multiple parallel requests.
HTTP/2
removes this limitation of HTTP/1.1 and enables full request and
response multiplexing. This means that the client and server can break
down an HTTP message into independent frames, which are then
interleaved, and reassembled at the other end.
Overall, this is
the most important enhancement of HTTP/2, as it will in part eliminate
the need for multiple connections. This will in turn introduce numerous
performance benefits across all web technologies.
The reduced
number of connections means fewer Transport Layer Security (TLS)
handshakes, better session reuse, and an overall reduction in client and
server resource requirements. This makes applications faster, simpler
and cheaper to deploy.
Websites with many external assets (images or scripts) will see the largest performance gains from HTTP/2 multiplexing.
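As a rough illustration of what multiplexing means in practice, the sketch below uses Go's standard net/http client, which negotiates HTTP/2 for HTTPS servers that support it; the parallel fetches then travel as streams over one shared TCP connection rather than separate sockets. The URLs are placeholders:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	// Go's default HTTPS transport negotiates HTTP/2 when the server offers
	// it, so these "parallel" fetches travel as streams over one shared TCP
	// connection instead of each opening a socket of their own.
	urls := []string{
		"https://example.com/",      // placeholder URLs; swap in a real page
		"https://example.com/a.css", // and its external assets
		"https://example.com/b.js",
	}

	var wg sync.WaitGroup
	for _, u := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			resp, err := http.Get(u)
			if err != nil {
				fmt.Println(u, "error:", err)
				return
			}
			defer resp.Body.Close()
			fmt.Println(u, resp.Status)
		}(u)
	}
	wg.Wait()
}
```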
Stream Prioritization and Dependency
Further improvements to multiplexed streams come from weights and stream dependencies. HTTP/2 allows us to give each stream a weight (a value
between 1 and 256), and make it explicitly dependent on another stream.
This
dependency and weight combination leads to the creation of a
prioritization tree, which tells the server how the client would prefer
to receive responses.
The server will use the information in the
prioritization tree to control the allocation of CPU, memory, and other
resources, as well as the allocation of bandwidth to ensure the client
receives the optimal delivery of high-priority responses.
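The exact scheduling is left to the server, but a toy sketch helps show the idea: sibling streams in the prioritization tree receive bandwidth in proportion to their weights. This is an illustrative model only, not how any particular server implements it:

```go
package main

import "fmt"

// stream is a node in a toy prioritization tree: each stream has a weight
// from 1 to 256 and may be declared dependent on a parent stream.
type stream struct {
	id       uint32
	weight   int
	children []*stream
}

// share splits the bandwidth available to a parent among its dependent
// streams in proportion to their weights.
func share(parentBandwidth float64, siblings []*stream) map[uint32]float64 {
	total := 0
	for _, s := range siblings {
		total += s.weight
	}
	out := make(map[uint32]float64)
	for _, s := range siblings {
		out[s.id] = parentBandwidth * float64(s.weight) / float64(total)
	}
	return out
}

func main() {
	// A stylesheet stream weighted three times heavier than an image stream.
	css := &stream{id: 3, weight: 192}
	img := &stream{id: 5, weight: 64}
	fmt.Println(share(1.0, []*stream{css, img})) // map[3:0.75 5:0.25]
}
```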
Flow Control
Flow control in HTTP/2 addresses the same problem as in HTTP/1.1. However, since HTTP/2 streams are multiplexed within a single TCP connection, the way flow control works in HTTP/1.1 is no longer efficient.
In short, flow control is needed to stop streams from interfering with each other and causing a blockage; this is what makes multiplexing possible. HTTP/2 allows for a
variety of flow-control algorithms to be used, without requiring
protocol changes.
No algorithm for flow control is specified in
HTTP/2. Instead, a set of building blocks has been provided to aid
clients and servers to apply their own flow control.
You can find the specifics of these building blocks in the “Flow Control” section of the HTTP/2 internet-draft.
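As a simple mental model (not taken from any particular implementation), flow control boils down to per-stream credit: a sender may only emit DATA while its window is positive, and the receiver tops the window up with WINDOW_UPDATE frames. A toy sketch in Go:

```go
package main

import "fmt"

// flowWindow is a toy model of one HTTP/2 flow-control window. A sender may
// only transmit DATA while the window is positive; the receiver replenishes
// it by sending WINDOW_UPDATE frames.
type flowWindow struct {
	available int32 // the spec's default initial window is 65,535 octets
}

// consume is called by the sender before writing a DATA frame of n octets.
func (w *flowWindow) consume(n int32) bool {
	if n > w.available {
		return false // blocked until the peer grants more credit
	}
	w.available -= n
	return true
}

// windowUpdate models the receiver granting the sender n more octets.
func (w *flowWindow) windowUpdate(n int32) {
	w.available += n
}

func main() {
	w := flowWindow{available: 65535}
	fmt.Println(w.consume(16384), w.available) // true 49151
	fmt.Println(w.consume(65535), w.available) // false 49151 (must wait)
	w.windowUpdate(32768)
	fmt.Println(w.consume(65535), w.available) // true 16384
}
```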
Server Push
Your
browser will normally request and receive an HTML document from a
server when first visiting a page. The server then needs to wait for the
browser to parse the HTML document and send a request for the embedded
assets (CSS, JavaScript, images, etc.).
In HTTP/1.1, the server
cannot send these assets until the browser requests them, and each asset
requires a separate request (i.e. multiple handshakes and connections).
Server
push reduces latency by allowing the server to send these resources unprompted, as it already knows that the client will require them.
So in the example above, the server will push the CSS, JavaScript, and images to the browser so it can display the page more quickly.
Basically, server push allows a server to send multiple responses for a single client request.
This is the effect we currently get, albeit manually, by inlining CSS or JS into our HTML documents: we push the inlined resource to the client without waiting for the client to request it.
This is a big step away from the current HTTP standard of strict one-to-one request-response workflow.
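If you run a Go server, the standard library exposes server push through the http.Pusher interface on HTTP/2 connections. The sketch below pushes a stylesheet and a script before sending the HTML that references them; the file paths and certificate filenames are placeholders:

```go
package main

import (
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// On an HTTP/2 connection, the ResponseWriter also implements
	// http.Pusher, letting the server push assets it knows the page needs.
	if pusher, ok := w.(http.Pusher); ok {
		if err := pusher.Push("/static/app.css", nil); err != nil {
			log.Println("push failed:", err)
		}
		_ = pusher.Push("/static/app.js", nil)
	}
	w.Header().Set("Content-Type", "text/html")
	w.Write([]byte(`<html><head>
  <link rel="stylesheet" href="/static/app.css">
  <script src="/static/app.js"></script>
</head><body>Hello over HTTP/2</body></html>`))
}

func main() {
	http.Handle("/static/", http.FileServer(http.Dir(".")))
	http.HandleFunc("/", handler)
	// ListenAndServeTLS enables HTTP/2 automatically in Go's net/http.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}
```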
The Limitations of HTTP/2
SPDY
had a slightly stricter policy on security and required SSL encryption
for all connections. HTTP/2 does not require encryption, but many
services will not serve HTTP/2 without SSL.
All major browsers
support HTTP/2, but none of them will support it without encryption. The Can I Use website has a great overview of current browser support for HTTP/2.
The backward compatibility and translation between HTTP/1.1 and HTTP/2 can slow down page load speeds.
There
is no real reason why encryption shouldn’t be a default or mandatory
setup by now. If you already have an SSL certificate on your site, you
can improve the security of your HTTPS website by enabling HSTS.
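For example, on a Go server, enabling HSTS comes down to a single response header. This is a minimal sketch, assuming you already serve the site over TLS with your own certificate files:

```go
package main

import (
	"log"
	"net/http"
)

// hsts wraps a handler and adds the Strict-Transport-Security header,
// telling browsers to insist on HTTPS for this host for the next year.
func hsts(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Strict-Transport-Security", "max-age=31536000; includeSubDomains")
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello over HTTPS"))
	})
	// cert.pem and key.pem stand in for your own certificate files.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", hsts(mux)))
}
```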
Is HTTP/2 the Next Big Thing?
HTTP/2
was proposed as a standard in mid-2015, and most browsers added support
for it by the end of that year. HTTP/2 already affects the way that the
internet works and how applications and servers talk to each other.
There
are no requirements to force the use of HTTP/2, but so far it offers only benefits and no real drawbacks. It's also a fairly minor change from a
user perspective, one that people won’t really notice.
According to W3Techs, 31.7% of the top 10 million websites currently support HTTP/2. The quickest way for most of you to enable HTTP/2 on your website is to use Cloudflare's CDN.
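If you want to confirm whether a site (including your own) is actually being served over HTTP/2, a quick check is to look at the protocol negotiated for a response. A minimal sketch in Go, with the URL as a placeholder:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Go's HTTPS transport negotiates HTTP/2 when the server offers it, so
	// the response's Proto field reveals which protocol was actually used.
	resp, err := http.Get("https://example.com/") // replace with your own site
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("negotiated protocol:", resp.Proto) // "HTTP/2.0" or "HTTP/1.1"
}
```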
The next proposed standard (HTTP/3) is already in the works and is based on QUIC,
another experimental project by Google. In late 2018, the IETF's HTTP-WG and the QUIC Working Group officially requested that HTTP-over-QUIC become the new standard and be renamed HTTP/3.