With Google I/O 2018 just a few days away, we are close to getting a glimpse at the next iteration of Google’s Material Design. And while everyone expects the new version to be named Material Design 2, I hope that won’t be the case. A refreshed Material Design document hints at an even weirder name: MD refresh. I’m not entirely sure that’s a better name than Material Design 2, though.
Rounded Corners Everywhere
If you want to know where Material Design is headed, download the Canary version of Chrome and set the chrome://flags/#top-chrome-md flag to Refresh. After you do that, restart the browser and you’ll be ready to enjoy the new Firefox. Pardon me, the new experimental Chrome UI that looks similar to the old Firefox interface.
To get the full experience, change the following flags too:
If you hope this is just an experiment, then, as a proud owner of a Chromebook, I can tell you it is not. Rounded corners seem to be Google’s latest love affair. The new Gmail interface has them, and search results on mobile devices have them too. And while the new interface elements are quite subtle, they change the look and feel of Google’s user experience. Apparently, this is the result of the shift to mobile devices and touch-enabled screens, which are Google’s top priority.
And if you still hope that rounded corners are not the future, then have a look at the above-mentioned Material Design document where you’ll see where the new concept is going.
I’ve already mentioned the rounded tabs, but you’ll also notice that straight and rigorous lines are gone, and curves are everywhere.
Other changes include the relocation of the new-tab button to the far left of the interface, while the profile icon moves to the right side of the address bar. It’s clear that the upcoming changes are designed to improve the experience on touch-enabled devices.
When Will Material Design Refresh (2) Be Announced?
While there is no official announcement yet, it’s probably a done deal. We’ll have to wait for the Google I/O developer conference in May to find out when the changes will be pushed to the stable version of Chrome. Goodbye, my mouse, you have been the one, but touch is the new trend.
Google’s John Mueller recently stated in a webmaster hangout that W3C validation has no impact on search results.
Mueller was asked whether W3C validation errors could slow down the time it takes to download a page.
In response, Mueller says W3C validation has no impact on the time to download a page, and no impact on a website’s performance in search results in general.
“No, this does not affect time to download a page. Time to download a page is purely the time that it takes from Googlebot asking your server for a URL, to your server having provided that full content to Googlebot.
What is on that page is totally irrelevant, other than maybe if you have a lot of text then maybe it will take a long time to transfer. But HTML errors are totally irrelevant for that.”
As it relates to search results, Mueller says this about W3C validation:
“In general, the W3C validation is something that we do not use when it comes to search. So you don’t need to worry if your pages kind of meet the validation bar or not. However, using the validation is a great way to double check that you’re not doing anything broken on your site.”
Here’s why Mueller recommends using the W3C validator:
“So, in particular, for other kinds of devices for people who need accessibility features, the W3C validator is a great way to kind of get a confirmation that the markup you’re providing is pretty reasonable, and is something that most consumers of markup will be able to understand well.
So I definitely recommend checking out the validator tool and trying it on your pages and seeing what the results are, and then trying to improve things so you’re a little bit more in line with valid HTML.
That generally makes things a lot easier when it comes to displaying your pages, when it comes to understanding the content on your pages for things like screen readers. All of that makes it a lot easier if you have reasonable HTML.”
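Mueller’s advice to spot-check pages is easy to script. As a minimal sketch (assuming the public W3C Nu HTML Checker endpoint at validator.w3.org/nu, which returns a JSON report when `out=json` is requested), you could post a page’s HTML and tally the checker’s messages:

```python
import json
from urllib import request

NU_CHECKER = "https://validator.w3.org/nu/?out=json"  # public W3C Nu HTML Checker

def validate_html(html: str) -> dict:
    """POST raw HTML to the Nu checker and return its JSON report (needs network)."""
    req = request.Request(
        NU_CHECKER,
        data=html.encode("utf-8"),
        headers={"Content-Type": "text/html; charset=utf-8",
                 "User-Agent": "validator-sketch/0.1"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

def summarize(report: dict) -> dict:
    """Count errors and warnings in a Nu checker JSON report."""
    counts = {"error": 0, "warning": 0}
    for msg in report.get("messages", []):
        if msg.get("type") == "error":
            counts["error"] += 1
        elif msg.get("subType") == "warning":  # warnings arrive as type=info, subType=warning
            counts["warning"] += 1
    return counts

# Offline demonstration with a report shaped like the checker's output:
sample = {"messages": [
    {"type": "error", "message": "Stray end tag \u201cdiv\u201d."},
    {"type": "info", "subType": "warning", "message": "Consider adding a \u201clang\u201d attribute."},
]}
print(summarize(sample))  # {'error': 1, 'warning': 1}
```

As Mueller notes, a non-zero error count here won’t hurt rankings, but it is a cheap signal that screen readers and other markup consumers may struggle with the page.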
Google Chrome (v80) is following in the footsteps of Mozilla Firefox (v72) and Apple Safari (v12.1) on notifications: websites that ask for opt-in immediately will now only be able to use quiet notification prompts.
These prompts are far less visible than the standard prompts that show up below the address bar. What’s more, in Chrome, users can now receive all opt-in requests quietly if they choose.
Many brands—retailers and publishers, in particular—have experienced tremendous success with web notifications. For instance, Asda’s George.com gets an astonishing 40% conversion rate with notifications on abandoned carts and a 27% clickthrough rate on segmented alerts.
While web browsers give users more control, brands must adapt. Here are five ways of dealing with these changes:
1. Be clear about the benefits of opting-in
What value does your website messaging offer? Will subscribers get exclusive content or offers, or alerts when their order ships? It’s key to highlight such value in a soft prompt before triggering the browser’s actual notification prompt.
2. Provide granular preferences
Offer visitors a preference center for them to customize settings to receive only notifications they truly want. For instance, a merchant may offer notifications for daily flash sales, weekly specials, new product arrivals and/or transaction updates. More control over notifications equals more customer happiness.
3. Don’t rush the “ask”
Like needy people, needy brands are a turn-off. Consider waiting until visitors have taken an action that signals interest before asking them to opt in. Have they looked at a promotion, watched a video or searched for a specific product? Pinpoint the moment when asking for the opt-in will streamline the customer journey instead of stalling it.
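The “wait for a signal of interest” rule above can be sketched as a small gating function. The engagement event names here are hypothetical; the permission states mirror the browser’s `Notification.permission` values (`"default"`, `"granted"`, `"denied"`):

```python
# Sketch of "don't rush the ask": only show our own soft prompt once the
# visitor has signaled interest. Event names below are illustrative, not
# tied to any specific analytics product.
ENGAGEMENT_SIGNALS = {"viewed_promotion", "watched_video", "searched_product"}

def should_show_soft_prompt(events, permission_state="default"):
    """Show the soft prompt only if the browser permission is still
    undecided ("default") and the visitor has shown interest."""
    if permission_state != "default":  # already granted or denied
        return False
    return any(e in ENGAGEMENT_SIGNALS for e in events)

print(should_show_soft_prompt(["page_view"]))                   # False
print(should_show_soft_prompt(["page_view", "watched_video"]))  # True
print(should_show_soft_prompt(["watched_video"], "granted"))    # False
```

Only after the visitor accepts this soft prompt would the site call the browser’s real permission request, keeping the quiet-prompt penalty at bay.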
4. Test various flows
Web opt-ins are often the largest addressable audiences for brands, hence marketers don’t want to wait too long before making the ask. You should continuously A/B test your opt-in prompts, including timing, language, and offers. While browsers will judge your site by opt-in rate, brands should be focused on better long-term engagement, more conversions, higher frequency, and greater lifetime value.
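Continuous A/B testing of opt-in flows needs stable bucket assignment, so each visitor always sees the same variant. A common sketch is deterministic hashing of a visitor ID; the variant names here are illustrative:

```python
import hashlib

def prompt_variant(visitor_id: str,
                   variants=("immediate", "after_engagement", "after_two_pages")):
    """Deterministically assign a visitor to an opt-in-prompt variant by
    hashing their ID. Variant names are hypothetical examples."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same visitor always lands in the same bucket:
print(prompt_variant("visitor-42") == prompt_variant("visitor-42"))  # True
```

With stable buckets you can compare not just opt-in rate per variant, but the longer-term engagement and conversion metrics the article recommends optimizing for.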
5. Reward opens
Last but not least, notifications have become central to the customer experience for both apps and mobile platforms, which explains why the opt-in rate for apps exceeds 50%. Website marketers should reward customers for notification engagement. For example, they can offer double loyalty point days, early access to the biggest deals or notifications when wish list items go on sale.
Mike Stone is the SVP of marketing at Airship.
Google’s Danny Sullivan tweeted a response regarding an article written by Dr. Pete Meyers. The article, published on Moz, was about an increase in search features that push down the traditional ten blue links. Danny raised interesting issues with the article that deserve to be considered.
What is an Organic Listing?
The first point Danny Sullivan discussed was the definition of an organic listing.
The article defines an organic listing as the traditional ten blue links that lead to a web page. Everything else it describes as “organic components” or “technically organic,” as a way to set them apart from the ten blue links, which the article regards as the organic listings.
“To be clear, I respect the work you do. And I appreciate some of the things you highlight in the article. It’s just difficult on the other end to see the first example shows an organic result at the top of the page but read it is 2,938px down because it’s not a web link.”
“Your customers probably won’t understand that organic isn’t just web pages if you continue to use organic to mean that. Saying organic listings are “technically” that way or have a “component” — sorry, but it feels like it feeds misunderstandings and confusion.”
“My concern is people who don’t take care to read come away with the idea that organic has diminished when there is organic all over the page. It potentially keeps people thinking backward rather than forward.”
Move Forward Not Backwards
I believe that by “people thinking backward” Danny means clinging to the idea that SERPs are ten blue links and ignoring opportunities latent in rich search features.
“Thinking forward” may be the understanding that featured snippets, videos and so on represent opportunities to rank in a different way and get more traffic.
I know for myself that when I search for the name of a song I often look for the green Spotify icon so that I can click that and listen to the song while in the car. That green Spotify icon isn’t a part of the ten blue links but it is immensely useful.
Vague Search Queries
Moz’s example of the “worst-case” is a search for the phrase “lollipop.” The report notes that a user must scroll 2,938 pixels to reach the traditional “blue links” organic listings.
But according to Danny, you don’t have to scroll nearly 3,000 pixels for organic listings. There are multiple organic listings at the top of the page.
“…when I read something like “While featured snippets are technically considered organic” or the idea that for “Lollipop” that the first listing isn’t the big video listings at the very top of the page, there seem to be some problematic assumptions…”
“Featured snippets aren’t “technically” organic listings. They are organic listings. And ignoring listings that appear in Top Stories, businesses in local, programs in college displays feels like a dated assessment of how search works….”
Here’s a screenshot of the search results for the search phrase, lollipop:
As you can see in the above screenshot, Google’s search result satisfies five search intents.
An organic video listing of the song.
Lyrics for the song.
Links to music services that offer the song.
A link to search results about the song.
A link to search results about lollipop the candy.
Search and Search Intents
Satisfying the search intent for a one-word search phrase is difficult because there are likely to be multiple search intents.
Google has to identify the most popular intent. In this case it appears to be the song, Lollipop. Then Google must satisfy the related and alternate search intents (lyrics, listen on a music service, band information and lollipop the candy).
If you look at the screenshot, it’s evident that Google successfully satisfies five search intents for that one-word keyword phrase.
Search isn’t about linking to websites; that’s the means to an end. The end is satisfying search intents. Sometimes that means a link to Spotify. Sometimes users are satisfied by a link to a video.
An Alternate Look
The following are my thoughts about the article. They’re not meant to be criticisms. They are just thoughts that occurred to me as I read the article.
1. Keywords in the Article are Vague
Basing a study on keywords with vague search intent virtually guarantees that the search results will show features like People Also Ask, local business listings, videos, links to music services and so on.
As was pointed out, ten blue links are not as useful for satisfying multiple search intents for vague queries.
2. Keyword Examples in Article are Not Head Terms
This is Moz’s stated methodology:
“While the keywords in this data set are distributed across a wide range of topics and industries, the set skews toward more competitive “head” terms. “
Judging by the keyword phrases used in the article as examples, the keywords used in the study are short phrases but are not necessarily head terms.
Head terms are phrases that have a large search volume. What constitutes a head term is defined entirely by the search volume, how often a query is searched.
Moz appears to apply the label “head term” to search phrases that are short but are not necessarily popular.
This is a common issue with how head terms are considered. It is assumed that short phrases of one or two words have a high search volume.
The definition of a head term has nothing to do with how many words are in the search query. It’s 100% about search volume.
Because people are using more conversational search queries, it could very well be that the vague queries in Moz’s study are not head terms but simply vague terms, which will naturally skew the results toward SERPs with features designed to satisfy multiple search intents.
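The distinction the article draws, that “head term” is defined by search volume rather than word count, can be expressed directly. The monthly volumes and the cutoff below are invented for illustration, not real Google Trends data:

```python
# A head term is defined by search volume, not by how short the phrase is.
# Volumes and threshold below are made-up numbers for this sketch.
MONTHLY_VOLUME = {
    "lollipop": 20_000,
    "vacuum cleaners": 15_000,
    "iphone case": 400_000,
}
HEAD_TERM_THRESHOLD = 100_000  # arbitrary cutoff for illustration

def is_head_term(query: str) -> bool:
    """A query qualifies as a head term only if its volume clears the bar."""
    return MONTHLY_VOLUME.get(query.lower(), 0) >= HEAD_TERM_THRESHOLD

print([q for q in MONTHLY_VOLUME if is_head_term(q)])  # ['iphone case']
```

Under this definition, a one-word query like “lollipop” can still be a low-volume, merely vague term, which is exactly the point made below with Google Trends.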
Google Trends Evidence
I checked to see if the Moz article search queries were indeed head terms. I compared two of Moz’s search phrases, lollipop and vacuum cleaners in Google Trends against a known popular phrase, iPhone Case.
As you can see in the Google Trends graph above, two of the search queries from the Moz article have relatively low search volume compared to the popular phrase iPhone case.
Moz’s phrases are short and vague and contain multiple search intents. They are arguably not head terms because by definition a head term has a high search volume.
By contrast, the search query “iPhone case” is a true head term.
Below is a screenshot of a Google search results page for that head term:
As can be seen in the above screenshot, Google shows ads followed by the ten blue links. The reason Google is showing the ten blue links is presumably because the search phrase is unambiguous.
Some may point to Google’s search features like local boxes, videos and carousels as if those features are a bad thing because they push down the ten blue links.
But the reason Google shows features is to satisfy search intents, to meet the needs of the user.
My suggestion is that perhaps these search features that supposedly make the search results “worse” serve a purpose and can also result in search traffic.
Because conversational search queries contain multiple words, the Moz study is arguably not representative of the state of Google search results, because the methodology is skewed toward short phrases.
Going by the examples provided by the Moz article, it appears that the research uses short and vague queries. This results in a skewed outcome dominated by SERPs with multiple search features designed to help users with a diverse set of search intents.
It could be argued that an even-handed study would include conversational search.
Are Blue Links More Useful?
It is arguably unreasonable to assert that a search results page comprising ten blue links is the best way to present a complex search result for a vague query with multiple search intents.
The Moz article presumes that the ten blue links are the listings that matter and that search features get in the way.
This is implied from the very first sentence:
“Being #1 on Google isn’t what it used to be.”
Moz’s definition of #1 is in the context of the ten blue links. The Moz article goes on to say:
“The worst case scenario, a search for “Disney stock,” pushed #1 all the way down to 976px.”
The assumption is that the blue links are important and that everything else that gets in the way of those blue links make the search results “worse.”
The Moz article states:
“It feels like the plight of #1 is only getting worse.”
Danny Sullivan responded:

“Search is about serving info; sometimes a web page isn’t the best source.
Providing refinement options helps users narrow to better info, which helps sites….”
The purpose of the different features is to answer queries that have multiple search intents in a way that is easy to navigate. That’s useful.
The article itself acknowledges the usefulness of search features at the very end:
“…many rich features are really the evolution of vertical results, like news, videos, and images, that still have an organic component. In other words, these are results that we can potentially create content for and rank in, even if they’re not the ten blue links we traditionally think of as organic search.”
The article notes that there are organic results in the various search features. It also acknowledges there are opportunities in those search features.
So it’s kind of puzzling that the article spends so much time making the case that search features pushing down the ten blue links makes the search results “worse.”
On January 13th, MozCast measured significant algorithm flux lasting about three days (the dotted line shows the 30-day average prior to the 13th, which is consistent with historical averages) …
That same day, Google announced the release of a core update dubbed the January 2020 Core Update (in line with their recent naming conventions) …
On January 16th, Google announced the update was “mostly done,” aligning fairly well with the measured temperatures in the graph above. Temperatures settled down after the three-day spike …
It appears that the dust has mostly settled on the January 2020 Core Update. Interpreting core updates can be challenging, but are there any takeaways we can gather from the data?
How does it compare to other updates?
How did the January 2020 Core Update stack up against recent core updates? The chart below shows the previous four named core updates, back to August 2018 (AKA “Medic”) …
While the January 2020 update wasn’t on par with “Medic,” it tracks closely to the previous three updates. Note that all of these updates are well above the MozCast average. While not all named updates are measurable, all of the recent core updates have generated substantial ranking flux.
Which verticals were hit hardest?
MozCast is split into 20 verticals, matching Google AdWords categories. It can be tough to interpret single-day movement across categories, since they naturally vary, but here’s the data for the range of the update (January 14–16) for the seven categories that topped 100°F on January 14 …
Health tops the list, consistent with anecdotal evidence from previous core updates. One consistent finding, broadly speaking, is that sites impacted by one core update seem more likely to be impacted by subsequent core updates.
Who won and who lost this time?
Winners/losers analyses can be dangerous, for a few reasons. First, they depend on your particular data set. Second, humans have a knack for seeing patterns that aren’t there. It’s easy to take a couple of data points and over-generalize. Third, there are many ways to measure changes over time.
We can’t entirely fix the first problem — that’s the nature of data analysis. For the second problem, we have to trust you, the reader. We can partially address the third problem by making sure we’re looking at changes both in absolute and relative terms. For example, knowing a site gained 100% SERP share isn’t very interesting if it went from one ranking in our data set to two. So, for both of the following charts, we’ll restrict our analysis to subdomains that had at least 25 rankings across MozCast’s 10,000 SERPs on January 14th. We’ll also display the raw ranking counts for some added perspective.
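The filter-then-rank methodology described above can be sketched in a few lines. The subdomains and ranking counts here are invented for illustration; MozCast’s real data set is 10,000 SERPs:

```python
# Sketch of the winners/losers method: keep only subdomains with at least
# 25 rankings on day one, then sort by percent change in SERP share.
MIN_RANKINGS = 25

def ranking_changes(day1: dict, day2: dict, min_rankings: int = MIN_RANKINGS):
    """Return (subdomain, before, after, pct_change) rows sorted by
    pct_change, restricted to subdomains that clear the day-one threshold."""
    rows = []
    for sub, before in day1.items():
        if before < min_rankings:
            continue  # too few rankings for a stable percentage
        after = day2.get(sub, 0)
        rows.append((sub, before, after, 100.0 * (after - before) / before))
    return sorted(rows, key=lambda r: r[3], reverse=True)

# Invented example counts (not MozCast data):
day1 = {"example-health.com": 40, "tiny-site.com": 2, "example-travel.com": 50}
day2 = {"example-health.com": 60, "tiny-site.com": 4, "example-travel.com": 25}
for row in ranking_changes(day1, day2):
    print(row)
```

Note how tiny-site.com doubles its share (+100%) yet is excluded: with only two rankings on day one, that swing is exactly the kind of over-generalization the threshold guards against.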
Here are the top 25 winners by % change over the 3 days of the update. The “Jan 14” and “Jan 16” columns represent the total count of rankings (i.e. SERP share) on those days …
If you’ve read about previous core updates, you may see a couple of familiar subdomains, including VeryWellHealth.com and a couple of its cousins. Even at a glance, this list goes well beyond healthcare and represents a healthy mix of verticals and some major players, including Instagram and the Google Play store.
I hate to use the word “losers,” and there’s no way to tell why any given site gained or lost rankings during this time period (it may not be due to the core update), but I’ll present the data as impartially as possible. Here are the 25 sites that lost the most rankings by percentage change …
Orbitz took heavy losses in our data set, as did the phone number lookup site ZabaSearch. Interestingly, one of the Very Well family of sites (three of which were in our top 25 list) landed in the bottom 25. There are a handful of healthcare sites in the mix, including the reputable Cleveland Clinic (although this appears to be primarily a patient portal).
What can we do about any of this?
Google describes core updates as “significant, broad changes to our search algorithms and systems … designed to ensure that overall, we’re delivering on our mission to present relevant and authoritative content to searchers.” They’re quick to say that a core update isn’t a penalty and that “there’s nothing wrong with pages that may perform less well.” Of course, that’s cold comfort if your site was negatively impacted.
We know that content quality matters, but that’s a vague concept that can be hard to pin down. If you’ve taken losses in a core update, it is worth assessing if your content is well matched to the needs of your visitors, including whether it’s accurate, up to date, and generally written in a way that demonstrates expertise.
We also know that sites impacted by one core update seem to be more likely to see movement in subsequent core updates. So, if you’ve been hit in one of the core updates since “Medic,” keep your eyes open. This is a work in progress, and Google is making adjustments as they go.
Ultimately, the impact of core updates gives us clues about Google’s broader intent and how best to align with that intent. Look at sites that performed well and try to understand how they might be serving their core audiences. If you lost rankings, are they rankings that matter? Was your content really a match to the intent of those searchers?