Chrome tests Google side search in the browser
Google Chrome is now testing side search, a new feature that makes it easier to compare search results on a single browser page. “We’re experimenting with a new side panel in the Chrome OS Dev channel, so you can view a page and the search results at the same time,” Google announced on the Chromium blog.
This side search feature lets you view a page in your main browser window without navigating back and forth, losing your search results, or opening more tabs. “The goal of this experiment is to explore how Chrome can better help users easily compare results,” Google said.
How it works. First, you need to be on the Chrome OS Dev channel on desktop to see it. To open the side panel and view the search results, click the G icon next to the search bar at the top left. Again, this is an experiment Google is running in a pre-release version of Chrome.
Read more: Chrome tests Google side search in the browser
Google Ads launches new budget report
What the budget report shows. The budget report shows daily spend, your campaign’s monthly spending limit (solid grey line), your monthly spend forecast (dotted blue line), cost to date (solid blue line), and any budget changes you’ve made during that particular month.
How to access the budget report. To see the budget report, you’ll first need to have a campaign with a date range that includes the current month. The budget report is accessible from the Campaigns page, the shared library, and the Ad groups page.
Read more: Google Ads launches new budget report
Google pushes back FLoC testing to Q1 2022
The timeline divides initiatives into four categories (“fight spam and fraud on the web,” “show relevant content and ads,” “measure digital ads,” and “strengthen cross-site privacy boundaries”). APIs shown on the timeline are based on Google’s current expectations and are subject to change. The timeline will be updated monthly. The phases indicated on the timeline are as follows:
- Discussion – The technologies and their prototypes are discussed in forums such as GitHub or W3C groups.
- Testing – All technologies for the use case are available for developers to test and may be refined based on results.
- Ready for adoption – Once the development process is complete, the successful technologies are ready to be used at scale. They will be launched in Chrome and ready for scaled use across the web.
- Transition period: Stage 1 – APIs for each use case are available for adoption. Chrome will monitor adoption and feedback carefully before moving to the next stage.
- Transition period: Stage 2 – Chrome will phase out support for third-party cookies over a three-month period finishing in late 2023.
Read more: Google pushes back FLoC testing to Q1 2022
How AI is making information more useful
Making multimodal search possible with MUM
Earlier this year at Google I/O, we announced we’ve reached a critical milestone for understanding information with Multitask Unified Model, or MUM for short.
We’ve been experimenting with using MUM’s capabilities to make our products more helpful and enable entirely new ways to search. Today, we’re sharing an early look at what will be possible with MUM.
In the coming months, we’ll introduce a new way to search visually, with the ability to ask questions about what you see.
Helping you explore with a redesigned Search page
We’re also announcing how we’re applying AI advances like MUM to redesign Google Search. These new features are the latest steps we’re taking to make searching more natural and intuitive.
First, we’re making it easier to explore and understand new topics with “Things to know.” Let’s say you want to decorate your apartment, and you’re interested in learning more about creating acrylic paintings. If you search for “acrylic painting,” Google understands how people typically explore this topic, and shows the aspects people are likely to look at first. For example, we can identify more than 350 topics related to acrylic painting, and help you find the right path to take.
Get more from videos
We already use advanced AI systems to identify key moments in videos, like the winning shot in a basketball game, or steps in a recipe. Today, we’re taking this a step further, introducing a new experience that identifies related topics in a video, with links to easily dig deeper and learn more.
Read more: How AI is making information more useful
A shoppable TV screen with YouTube
Whether kicking back with a movie or kicking their fitness routine into gear, more people are choosing to experience YouTube on the big screen. When they do, they can watch longer, enjoy multiple shows back to back, and experience it all from the comfort of their couch with friends and family. Many even build a routine around it. In the U.S., over 120 million people streamed YouTube or YouTube TV on their TV screens in December 2020.
To help consumers more easily learn about the products and services they’re interested in, we’re making YouTube ads on connected TVs more shoppable. Today, we’re expanding Video action campaigns to CTVs to help advertisers drive more online sales or generate leads, and grow their business.
With a quarter of logged-in YouTube CTV viewers watching primarily on TVs, the living room is becoming an essential place for brands to drive incremental conversions with new audiences. In early experiments for Video action campaigns on TV screens, over 90% of conversions coming from CTV would not have been reachable on mobile and desktop devices.
When a viewer sees a Video action campaign on their TV, they are invited through a URL at the bottom of their screen to continue shopping on the brand’s website from their desktop or mobile device — without interrupting their viewing session.
Advertisers can also take advantage of the Conversion Lift beta on TV screens to get actionable results in real time. Conversion Lift measures the impact of YouTube ads on driving user actions, such as website visits, sign-ups, purchases, and other types of conversions.
Read more: A shoppable TV screen with YouTube