The Mozilla Blog: Keoni Mahelona on promoting Indigenous communities, the evolution of the Fediverse and data protection

At Mozilla, we know we can’t create a better future alone. That is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Keoni Mahelona, a builder behind technologies that aim to protect and promote Indigenous languages and knowledge. We talked with Keoni about his current work at Te Hiku Media, the challenges of preserving Indigenous cultures, big tech and more.

So first off, what inspired you to do the work you’re doing now with Te Hiku Media?

Mahelona: I sort of started at the organization because my partner, who’s the CEO, needed help with doing a website. But then the website turned into an entire digital platform, which then turned into building AI to help us do the work that we have to do. But I guess the most important thing is the alignment of values between me as a person and as a Native Hawaiian and the values of the community up here — the Māori community and the organization. Having the strong desire for sovereignty for our land, which has been the struggle we’ve been having now for hundreds of years. We’re still trying to get that back, both in Aotearoa and in Hawaii, but also sovereignty for our languages and our data, and pretty much everything that encompasses us in our communities. And it was really clear that the work that we do at Te Hiku is very important for the community, but also that we needed to maintain sovereignty over that work. And if we made the wrong choices about how we store our data, where we put our data, what platforms we use, then we would cede some of that sovereignty and take ourselves further back rather than forward.

What were (and are) some of those challenges that you guys had to overcome to be able to create those tools? I feel like a lot of people might not know those challenges and how you have to persevere through those things to create, to preserve.

Sure, the lack of data is a challenge that big tech seems to overcome quite easily with their billions of dollars, whether they’re stealing it at scale or paying people for it at scale. They have the resources to do that and to litigate if they need to because of theft, and they’re just doing what America did, right? Stole our land at scale. So for us, actually, we knew that the data would be the hardest part, but not so much getting the data, or whether the data existed — there’s a vibrant community of language speakers here — the hard part was going to be, how do we protect the data that we collect? And even now, I worry, because there are just so many bots online scraping stuff, and we see bots trying to log into our online forms. And I’m thinking hopefully these are just bots trying to log into a form because they see the form, versus someone who knows that we’ve got some valuable data here, and if they can get in, they could use that data to add Māori to their models and profit off of that. When you have organizations like Microsoft and Google making hundreds of millions off of selling services to education and government in this country, you know that would be a valuable corpus for them. I’m not saying that they would steal it, I don’t know, I’d hope not, but I feel like OpenAI would probably do something like that.

And how do we overcome that? We just tried. We did the best we could, given the resources we had, to ensure that things are safe, and we think they’re relatively safe, although I still get anxiety about it. Some of the other challenges we face are being a bunch of brown people from a community, so there’s the stereotype associated with the area and with anyone who might be associated with this place. So there were people like, “Ha, you guys can’t do this.” And we proved them wrong. There were even funders who were Māori who actually thought, “These guys are crazy, but you know what, this is exactly what we need. We need to find people who are crazy and who might actually pull this off, because it would be quite beneficial.”

We’ve had other people inquire as to why our organization got science funding to do science research. I actually have a master’s in science — I actually have two master’s degrees in science, although one’s a business science degree, whatever that means — but there was this quite racist media organization on the South Island of this country that did an Official Information Act request on our organization, saying, “Why is this Māori media company getting science-based funding? They don’t know anything about science.” We actually had a scientist at our organization, and they didn’t. So these are some of the more interesting challenges that we’ve come across in this journey of going from radio and television broadcasting to actually being a tech company and doing science and research. It’s the racism and the discrimination that we’ve had to overcome as well. In some cases, we think we’ve been denied funding because our organization is Māori, and we’ve often had to do the hard work first on the smell of an oily rag, as they say here, to prove that we are capable of doing the work, for people to recognize that, yeah, they can actually fund us. And that we can deliver results based on the stipulations of the fund or whatever when you’re getting science-based funding grants and stuff like that. I think we’ve shown the government that you don’t need to be a large university to actually do good research and have an impact in the science community. But it certainly hasn’t been easy.

Keoni Mahelona at Mozilla’s Rise25 award ceremony in October 2023.

I imagine even with how long you’ve been there and how long you guys have been doing this, that there’s still an ongoing feeling of anxiety that’s extremely frustrating.

We’re a nonprofit, so a lot of our money comes from government funding. We’re also a broadcaster, so we have public broadcasting funding that funds some of the work we do, and then there’s science-based funding.

The New Zealand political environment right now is absolutely terrible. There have been hundreds, probably thousands, of job cuts in the government. The current coalition government needs to raise something like three billion dollars for tax cuts for landlords, and in order to do that, they’re just slashing a lot of funding and projects and people’s jobs in government. There’s this rhetoric being peddled that the government is quite inefficient, that we’re just hemorrhaging money on all these stupid positions and things like that. So that also gives us anxiety, because a changing government might affect funding that is available to our organization. So we also have to deal with that, being a charity and not a capitalist organization.

The other thing that gives us anxiety is the inevitable, right? I actually think it’s inevitable, unfortunately, that these big tech companies will eventually be able to replicate our languages. They won’t be good. They’ll never be good to the point where they will truly benefit and move our people forward. But they will be good enough that they will be able to profit from it. They profit by gaining the reputation of providing that service, ensuring you continue to go to Google, where you’re then served ads; they’re not selling the translation, but they are selling ads alongside it for profit, right? We see this essentially happening with a lot of Indigenous languages, where there is enough data being put online that these mostly American big tech corporations will profit from it. And the sad thing is that it was the Americans in the first place, and these other colonial nations, that fought to make our languages extinct. And now their corporations stand to profit from the languages that they tried to make extinct. So it’s really terrible.

How do you think some of these bigger corporations can be more respectful, inclusive, and supportive of Indigenous communities?

That’s an interesting question. I guess the first question is, should they be inclusive? Because sometimes the best thing to do is just stay away and let us get on with it. We don’t need your help. The unfortunate reality is that so many of our people are on Facebook and on Google, or whatever. The platforms are so dominating, or imperialist, that we have to use them in some cases. And because English is the dominant language on these platforms, especially for many Indigenous communities colonized by English-speaking nations, it means that you’re just going to continue to be bombarded with English and not have a space, if you don’t go out of your way to make a space and speak your language. It’s a bit of a catch-22, but I think it’s up to the communities to figure that one out, because we could collectively come together as communities and say, “We never expected Facebook or these other tech companies and platforms to support our language.” And that’s fine. Let’s go out into our own environment in our own communities and speak our languages rather than trying to rely on these tech companies to do it for us, right?

There are ways that they can actually just kind of help, but like, stay out of our business.

And that’s the better way to do it, because this sort of outsider coming in trying to save us just doesn’t work. I’ve been advocating that you have to support these communities to lead the solutions they see as best for their people, because Google doesn’t know what’s best for these communities. So they need to support the communities, and I don’t mean by building the language technologies themselves and selling them back; that is not the support I’m talking about. The support is staying away, or giving them discounts on resources, or giving them resources so that they can build and they can lead, because then you’re also upskilling them.

What do you think is the biggest challenge that we face in the world this year on and offline? And how do we combat it?

I see stuff happening to the Fediverse, which is interesting. Something that happened recently was some guy, who in his blog post very much identified as a tech bro from Silicon Valley, made the universal decision that the best thing to do for everybody is to hook up Threads and the Fediverse, so that people on Threads can access stuff on Mastodon etc., and likewise the other way around. And this is a single dude who apparently had talked to people and decided it was his duty or mission to connect Threads to the Fediverse, and it was just like, are you joking? And then there’s this other thing going on now, where similar types of dudes are getting angry at some instances for blocking other instances because they have people who are racist or misogynist, and they’re getting angry at these moderators who are doing what the point of the Fediverse is, right? Where you can create a safe space and decide who gets to come in and who doesn’t. What I’m getting at is, I think that as the Fediverse grows, it’s going to be interesting to see what sorts of problems come up, and how the things that we wanted to escape by leaving Twitter and jumping on Mastodon are kind of coming in. And I think it’s going to be interesting to see how we deal with that.

This is again where the incompatibility of capitalism and general communities comes into play, because if we have for-profit companies trying to do Fediverse stuff, then essentially we’re going to get what we already have, because ultimately, at the end of the day, you’re trying to maximize for profit. So long as the internet is a place where we have dominating companies trying to maximize for profit, we’re just always going to have more problems, and it’s absolutely terrible and frightening.

But yeah, politics and, I think, the evolution of the Fediverse are probably the things that I would be most concerned about. Then there’s also the normal stuff, which is just the theft of data and privacy.

What is one action that you think everybody should take to make the world and our online lives a little bit better?

I think they should just be more cognizant of the data they decide to put online. Don’t just think about how that data affects you as an individual, but how does it affect those who are close to you? How does it affect the communities to which you belong? And how does it affect other people who might be similar to you in that way?

People need to be respectful of their own data and others’ data, and think about their actions online with respect to being good stewards of all data: their own data, data from their communities, data of others. And whether you should download this thing or steal that thing or whatever. That’s essentially my message for everyone: be respectful, and think about data as you would think about your environment, taking care of it and respecting it.

We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope that people are celebrating in the next 25 years?

The fall of capitalism, I guess. The restoration of the Hawaiian nation — I can continue. Ultimately, I think a lot of problems come back to some very fundamental ways in which society has structured itself.

What gives you hope about the future of our world?

I think it’s actually this younger generation. I had this impression coming out of high school and going to university, and then kind of seeing the new generation coming through, and being confused by the perceptions you form across generations. When we livestream high school speeches … just the stuff that these kids talk about is amazing. Sometimes you even have a bit of a cry, because the topics they talk about are so good. To me, that gives hope that there are actually some really amazing young people who will someday fill our shoes and be politicians. That gives me hope that these people still exist despite all the negative stuff that we see today. That’s what I’m hopeful for.


The Mozilla Blog: How AI is redefining your sports experience right now

Artificial intelligence has the potential to transform many sectors of the economy, and the sports realm is no exception.

New technologies are already changing the way sports are played, viewed and consumed. From enhancing athlete recovery to live performance tracking and influencing rule changes, AI’s plunge into sports is providing leagues with more data and analysis than they’ve ever had.

Let’s start with football. The NFL has arguably led the way in integrating AI into sports. The league has partnered with Amazon Web Services (AWS) since 2017, and at the beginning of 2024 created the Digital Athlete, a tool using AI and machine learning to “build a complete view of players’ experience, which enables NFL teams to understand precisely what individual players need to stay healthy, recover quickly, and perform at their best.” The technology collects data from multiple sources, including game-day data captured with AWS, along with video and data from training, practice and in-game action. It then uses AWS technology to “run millions of simulations of NFL games and specific in-game scenarios” to identify which players are at the highest risk of injury. Teams use that information to develop injury prevention, training and recovery regimens. The technology was used by all 32 teams this past NFL season.

AI and the NFL’s relationship goes beyond the Digital Athlete initiative. In March, the league implemented a new kickoff rule after predictive analysis identified plays and body positions that most likely lead to injuries. The process included capturing data through chips in players’ shoulder pads, Brian Rolapp, chief media and business officer for the NFL, recently explained at The Washington Post Futurist Tech Summit, which was sponsored by Mozilla.

Consumers get a chance to experience the league’s AI investment, too. During the NFL and Amazon’s “Thursday Night Football” TV broadcasts, viewers have the option to watch games with “Prime Vision,” an alternate broadcast powered by Next Gen Stats, a real-time player and ball tracking data system. Prime Vision can do fun things like highlight a potential blitzing defender on a play based on what happens before the ball is snapped.

At other levels of football, AI is prevalent for the next generation of players with dreams of making it to the NFL. Exos, a sports science-driven performance company based in Arizona, has been employing AI technology for NFL draft hopefuls in the pre-draft training process for years. Several of the top picks in recent years have traveled to Exos’ facility in Phoenix to complete training, shed time off their 40-yard dashes and improve their vertical leaps, among other things. 

“When an athlete arrives, we take them through a robust sports science evaluation process,”  said Anthony Hobgood, Exos’ senior director of performance. “This evaluation gives us critical information about the athletes’ force profile, muscle-to-bone ratio and fundamental movement qualities. For example, some athletes will run faster by putting on more muscle, while others’ performance could be negatively impacted by putting on more muscle. The data we collect allows our team to make informed decisions about the game plans we build for our athletes. Our speed coaches have a combined total of over 40 years of experience training over 1,500 NFL draft prospects. When an athlete decides to train at Exos, they can be confident they are getting the best system ever created for NFL draft preparation.”

This training has paid off: From 2015-2023, Exos has produced 743 draft picks, an average of 83 per year, including 127 first-rounders. Last spring, almost every NFL team except one (Atlanta) drafted an Exos-trained athlete. 

“Our system has been tried and tested for over 25 years and uses data and the latest science in order to ensure our athletes have the very best,” Exos VP of Sport Adam Farrand said.

The NBA has used AI for some time as well. This February, it debuted a generative AI feature at its All-Star tech summit called NB-AI, which aims to enhance and personalize the live game experience for fans. The technology can make game highlights look like an animated superhero movie — think a film based on a certain bug that resides in New York.

“Today, AI is creating a similar excitement to what we saw around the early days of the internet,” NBA Commissioner Adam Silver said at the presentation. “Intuitively, most of us have a sense that artificial intelligence is going to change our lives. The question is, ‘How?’”

The WNBA also utilizes tech similar to the NFL’s Digital Athlete, obtaining three-dimensional player and ball-tracking data through its partnership with Genius Sports’ Second Spectrum. WNBA coaches and front-office leaders have access to analytical tools, including shot quality, maximum speed and defensive matchup data.

While the world of baseball has long stood behind its history and tradition, it has also stepped into the revolution of AI — in fun and strategic ways. 

Baseball has been using data and AI to aid in scouting players, player development, injury risk assessment, video analysis and game strategy. There are even AI chatbots that can create scouting reports for MLB players and evaluate them on the metrics AI believes best represent their abilities. Minor league baseball clubs are embracing Uplift Labs, which uses mobile movement tracking and 3D analysis tech for scouting players. The system uses mobile devices to “accurately capture athletic movements in any environment, gaining insights into performance optimization.”

In February, the Houston Astros were among the first MLB clubs to introduce facial recognition technology to allow fans into their ballpark. (The New York Mets were the first to do this, in 2021.) The San Francisco Giants even used AI and machine learning to understand what giveaway products they should offer fans for game promos.

AI’s capabilities in the sports world are only expanding as the technology evolves at an extraordinary pace. This shift provides major sports leagues opportunities to continue to improve their product on and off the field, while giving fans an exciting way to experience the games they love.

But there’s still a human element in sports we can’t ignore as these advancements continue. The human aspect is what makes sports so great, after all. While AI can provide teams data about why a basketball player is struggling to shoot well, for example, it has limitations: it can’t replace a coach evaluating that player’s performance on video, talking and empathizing with them, and coaching them through their struggles. The human interaction — not AI — builds trust between athletes and coaches to navigate those situations. Or, while a team like the Giants can certainly utilize AI to determine fan giveaways, going to a tailgate and talking directly to fans about what they’d like to see at games is a better route. AI should never bench the human side of the sports experience; it should be utilized as a resource for leagues, players and coaches while still prioritizing the human element.

Protecting these sports, while following laws and regulations, is important to remember as the excitement around these tools grows. The work teams and leagues need to do to preserve the history and human side of these sports, while moving them forward and ensuring AI is used ethically, is critical.


Firefox Nightly: Screenshots++ – These Weeks in Firefox: Issue 160

Highlights

  • The screenshots component pref just got enabled and is riding the trains in 127! This is a new implementation of the screenshots feature with a number of usability, accessibility and performance improvements over the original.
  • Thanks to Joseph Webster for creating a brand new JWPlayer video wrapper (bug) and for adding more sites under this wrapper to expand Picture-in-Picture captions support (bug).
    • New supported sites include AOL, C-SPAN, CPAC, CNBC, Reuters, The Independent, Yahoo and more!
  • Irene landed the first part of refreshed text formatting controls for Reader Mode. Check them out by toggling reader.improved_text_menu.enabled (bug 1880658)
    • A panel in Firefox's Reader Mode is shown for controlling layout and text on the page. The panel lets users control the content width, line spacing, character spacing, word spacing, and text alignment of the text in reader mode.
  • New tab wallpapers have landed in Nightly and will be released as an experiment in en-US. If you’d like to enable wallpapers, set browser.newtabpage.activity-stream.newtabWallpapers.enabled to true.
    • Firefox's New Tab page with a beautiful image of the aurora borealis set as the background wallpaper

      Set a new look for new tabs!

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Camille
  • gravyant
  • Itiel
  • Joseph Webster
  • Magnus Melin [:mkmelin]
  • Meera Murthy
  • Steve P

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Starting from Firefox 127, installing new single-signed add-ons is disallowed (while already installed single-signed add-ons are still allowed to run). This behavior is currently only enabled in Nightly (Bug 1886157), but it is expected to be extended to all channels later in the 127 cycle (Bug 1886160)
  • Fixed a styling issue hit by extension options pages embedded in about:addons when Dark mode is enabled (Bug 1888866)
WebExtensions APIs
  • As part of the ongoing work related to improving cross-browser compatibility for Manifest Version 3 extensions:
    • Customized keyboard shortcuts associated with the _execute_browser_action command for Manifest Version 2 extensions will be automatically associated with the _execute_action command when the same extension migrates to Manifest Version 3 (Bug 1797811). This way, the custom keyboard shortcut will keep working as expected from a user’s perspective.
    • DNR rule limits have been raised to match the limits enforced by other browsers (Bug 1803370)
    • DNR getDynamicRules and getSessionRules API methods now accept the additional ruleIds filter as a parameter, improving compatibility with the DNR API in more recent Chrome versions (Bug 1820870); see the sketch after this list
  • Improved errors logged when a content script file does not exist (Bug 1891502)
    • the error is now expected to look like Unable to load script: moz-extension://UUID/path/to/script.js
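
A rough sketch of the ruleIds filter mentioned above, as it could be used from a Manifest Version 3 background script (the rule IDs and the "declarativeNetRequest" permission in the manifest are assumptions for illustration):

```ts
// Fetch only the dynamic rules we care about, instead of every rule.
// Assumes the extension declares the "declarativeNetRequest" permission.
async function logSelectedRules(): Promise<void> {
  const rules = await browser.declarativeNetRequest.getDynamicRules({
    ruleIds: [1, 2, 3], // hypothetical IDs of rules registered earlier
  });
  for (const rule of rules) {
    console.log(`rule ${rule.id} -> action: ${rule.action.type}`);
  }
}
```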

Developer Tools

DevTools
  • Julian reverted a change from a few months ago, so DevTools screenshots are once again saved in the same location as Firefox screenshots (#1845037)
  • Alex fixed a Debugger crash (#1891699)
  • Nicolas fixed a visual glitch in the Debugger (#1891681)
  • Alex fixed an issue where network requests from iframes sent just before document destruction were not displayed in the Netmonitor (#1887852)
  • Nicolas replaced DevTools’ JS-based CSS lexer with a Rust-based version, using the same cssparser crate as Stylo (#1887638, #1892895)
    • This brought a ~10% performance improvement when displaying rules in the inspector (#1888607 + #1890552)
  • Thanks to :willdurand, we finally released a new version of the DevTools ADB extension used by about:debugging. The extension is now shipping with notarized binaries and can be used on recent macOS versions. (#1821449)
WebDriver BiDi
  • Thanks to gravyant who implemented a new helper Assert.isInstance to check whether objects are instances of specific classes (#1870880)
  • Henrik updated mozrunner/mozprocess to use “psutil” and support the new application restart mechanism on macOS (#1884401)
  • Sasha added support for the a11y attributes locator for the browsingContext.locateNodes command (#1885577)
  • Sasha added support for the devicePixelRatio parameter for the browsingContext.setViewport command (#1857961)
  • Henrik improved the way we check if an element is disabled when using the WebDriver ElementClear command (#1863266)
  • Julian updated the vendored puppeteer version to v22.6.5, which enables new network interception features in Puppeteer using WebDriver BiDi (#1891762)

Migration Improvements

New Tab Page

  • Work continues on a weather widget for new tab (borrowing logic from URL bar). Stay tuned!

Privacy & Security

  • We’re working on a new anti-tracking feature: Bounce Tracking Protection. It works similarly to the existing Cookie Purging feature in Firefox, but instead of a tracker list it relies on heuristics to detect bounce trackers.
    • It’s based on the navigational-tracking-protections spec draft in the PrivacyCG
    • Bug 1877432 first enabled the feature in Nightly in “dry run mode” where we don’t purge tracker storage but only collect telemetry. We’re looking to fully enable it in Nightly soon once we think it’s stable enough.

Profile Management (new this week!)

  • We’re getting underway with improvements to multiple profiles support in Firefox!
  • Eng discussion on Matrix: #fx-profile-eng
  • Backend work in toolkit/profile behind a build flag (MOZ_SELECTABLE_PROFILES)
  • Frontend work in browser/components/profiles behind a pref (browser.profiles.enabled)
  • Metabug is here: 1882882
  • Bugs landed so far:
    • Mossop added telemetry to record the version of the profiles database on startup and the number of profiles in it (bug 1878339)
    • Niklas added the profiles browser component (bug 1883143)
    • Niklas added profiles menu items to the app menu (bug 1883155)
  • Coming soon: Docs, final UX, and good-first-bugs

Screenshots

Search and Navigation

Storybook/Reusable Components

Anne van Kesteren: Undue base URL influence

The URL parser has many quirks due to its origins in a time where conformance test suites were atypical and implementation requirements were hidden in the examples section. Some consider these quirks deeply problematic, but personally I don’t really mind that one can write a hundred slashes after a scheme instead of two and get identical results. Sure, it would be better if that were not the case, but in the end it is something that is normalized away and therefore does not impact the fundamental aspects of the URL ecosystem.
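
As a quick illustration (runnable in any environment with the WHATWG URL API, such as a browser console):

```ts
// Extra slashes after a special scheme are normalized away:
// a hundred slashes parse exactly like two.
const two = new URL("https://example.com/");
const many = new URL("https:" + "/".repeat(100) + "example.com/");

console.log(many.href);              // "https://example.com/"
console.log(two.href === many.href); // true
```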

I was reminded the other day that there is one quirk however that does yield rather undesirable results. In particular for certain (non-conforming) inputs, the result will not be failure, but the exact URL returned will depend on the presence and type of base URL. This might be best explained with examples:

Input        Base URL (serialized)   Output (serialized)
https:test   (none)                  https://test/
https:test   http://example/         https://test/
https:test   https://example/        https://example/test
hello:test   (none)                  hello:test
hello:test   bye://example/          hello:test
hello:test   hello://example/        hello:test

This quirk only impacts so-called special schemes, which include http and https. And only when they match between the input and base URL. As a user of URLs you could work around this quirk by first parsing without a base URL and only if that returns failure, parse a second time with a base URL. That does have the unfortunate side effect of being inconsistent with the web platform (for non-conforming input), but depending on your use case that might be okay.
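
A minimal sketch of that workaround using the standard URL API:

```ts
// Try to parse `input` on its own first; only fall back to the base URL
// when standalone parsing fails. This sidesteps the special-scheme quirk:
// parseURL("https:test", "https://example/") returns https://test/
// rather than https://example/test.
function parseURL(input: string, base?: string): URL | null {
  try {
    return new URL(input);
  } catch {
    try {
      return base !== undefined ? new URL(input, base) : null;
    } catch {
      return null;
    }
  }
}
```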

I remember looking into whether this could be removed completely many years ago, but websites relied on it and end users trump theory.

Mozilla Thunderbird: Thunderbird for Android / K-9 Mail: April 2024 Progress Report

Welcome to our monthly report on turning K-9 Mail into Thunderbird for Android! Last month you could read about how we found and fixed bugs after publishing a new stable release. This month we start with… telling you that we fixed even more bugs.

Fixing bugs

After the release of K-9 Mail 6.800 we dedicated some time to fixing bugs. We published the first bugfix release in March and continued that work in April.

K-9 Mail 6.802

The second bugfix release contained these changes:

  • Push: Notify user if permission to schedule exact alarms is missing
  • Renamed “Send client ID” setting to “Send client information”
  • IMAP: Added support for the \NonExistent LIST response attribute
  • IMAP: Issue EXPUNGE command after moving without MOVE extension
  • Updated translations; added Hebrew translation

I’m especially happy that we were able to add back the Hebrew translation. We removed it prior to the K-9 Mail 6.800 release because the translation was less than 70% complete (it was at 49%). Since then, volunteers have translated the missing bits of the app, and in April the translation was almost complete.

Unfortunately, the same isn’t true for the Korean translation that was also removed. It was 69% complete, right below the threshold. Since then there has been no significant change. If you are a K-9 Mail user and a native Korean speaker, please consider helping out.

F-Droid metadata (again?)

In the previous progress report we described what change had led to the app description disappearing on F-Droid and how we intended to fix it. Unfortunately we found out that our approach to fixing the issue didn’t work due to the way F-Droid builds their app index. So we changed our approach once again and hope that the app description will be restored with the next app release.

Push & the permission to schedule alarms

K-9 Mail 6.802 notifies the user when Push is enabled in settings, but the permission to schedule exact alarms is missing. However, what we really want to do is ask the user for this permission before we allow them to enable Push.

This change was completed in April and will be included in the next bugfix release, K-9 Mail 6.803.

Material 3

As briefly mentioned in March’s progress report, we’ve started work on switching the app to Google’s latest version of Material Design – Material 3. In April we completed the technical conversion. The app is now using Material 3 components instead of the Material Design 2 ones.

The next step is to clean up the different screens in the app. This means adjusting spacings, text sizes, colors, and sometimes more extensive changes. 

We didn’t release any beta versions while the development version was still a mix of Material Design 2 and Material 3. Now that the first step is complete, we’ll resume publishing beta versions.

If you are a beta tester, please be aware that the app still looks quite rough in a couple of places. While the app should be fully functional, you might want to leave the beta program for a while if the look of the app is important to you.

Targeting Android 14

Part of the necessary app maintenance is to update the app to target the latest Android version. This is required for the app to use the latest security features and to cope with added restrictions the system puts in place. It’s also required by Google in order to be able to publish updates on Google Play.

The work to target Android 14 is now mostly complete. This involved some behind the scenes changes that users hopefully won’t notice at all. We’ll be testing these changes in a future beta version before including them in a K-9 Mail 6.8xx release.

Building two apps

If you’re reading this, it’s probably because you’re excited for Thunderbird for Android to finally be released. However, we’ve also heard numerous times that people love K-9 Mail and wish the app would stay around. That’s why we announced in December that we’d do just that.

We’ve started work on this and are now able to build two apps from the same source code. Thunderbird for Android already includes the fancy new Thunderbird logo and a first version of a blue theme.

But as you can see in the screenshots above, we’re not quite done yet. We still have to change parts of the app where the app name is displayed to use a placeholder instead of a hard-coded string. Then there’s the About screen and a couple of other places that require app-specific behavior.

We’ll keep you posted.

Releases

In April 2024 we published the following stable release: K-9 Mail 6.802.


This Week In Rust: This Week in Rust 546

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is derive_more, a crate for deriving a whole lot of traits.

Thanks to teor for the suggestion!

Please submit your suggestions and votes for next week!

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

CFP - Speakers

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the submission website through a PR to TWiR.

Updates from the Rust Project

426 pull requests were merged in the last week

Rust Compiler Performance Triage

Largely uneventful week; the most notable shifts were considered false alarms that arose from changes related to cfg-checking (either cargo enabling it, or adding cfgs like rustfmt to the "well-known cfgs" list).

Triage done by @pnkfelix. Revision range: c65b2dc9..69f53f5e

3 Regressions, 2 Improvements, 3 Mixed; 5 of them in rollups. 54 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-05-08 - 2024-06-05 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Rust and its borrow checker are like proper form when lifting boxes. While you might have been lifting boxes "the natural way" for decades without a problem, and it's an initial embuggerance to think about and perform proper lifting form, it is learnable, efficient, and prevents some important problems.

Or more succinctly:
C/C++: It'll screw your back(end).

And the reply:

  1. there’s a largish group of men who would feel their masculinity attacked if you implied they should learn it
  2. while it's learnable finding usefully targeted educational resources are hard to come by
  3. proper form while lifting boxes are a really terrible way to model graphs

Brett Witty and Leon on Mastodon

Thanks to Brett Witty for the self-suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla Addons Blog: Developer Spotlight: Port Authority

Port Authority gives you intuitive control over global block settings, notifications, and allow-list customization.

A few years ago, a developer known as ACK-J stumbled onto a tech article that revealed eBay was secretly port scanning its customers (i.e., scanning users’ internet-facing devices to learn what apps and services are listening on the network). The article further claimed there was nothing anyone could do to prevent this privacy compromise. ACK-J took that as a challenge. “After going down many rabbit holes,” he says, “I found that this script, which was port scanning everyone, is, in my opinion, malware.”

We spoke with ACK-J to better understand the obscure privacy risks of port scanning and how his extension Port Authority offers unique protections.

Why does port scanning present a privacy risk?

ACK-J: There is a common misconception/ignorance around how far websites are able to peer into your private home network. While modern browsers limit this to an extent, it is still overly permissive in my opinion. The privacy implications arise when websites, such as google.com, have the ability to secretly interact with your router’s administrative interface and local services running on your computer, and to discover devices on your home network. This behavior should be blocked by the same-origin policy (SOP), a fundamental security mechanism built into every web browser since the mid-1990s; however, due to convenience, it appears to be disabled for these requests. This caught a lot of people by surprise, including myself, and is why I wanted to make this type of traffic “opt-in” on my devices.
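
To make the risk concrete, here is a deliberately simplified sketch (not eBay’s actual script) of how ordinary page JavaScript can probe a port on the visitor’s own machine; the timing threshold is an arbitrary assumption for illustration:

```ts
// A cross-origin no-cors fetch to 127.0.0.1 yields an opaque response if
// something speaks HTTP on that port, and typically fails fast when the
// connection is actively refused.
async function probeLocalPort(port: number): Promise<"open" | "closed" | "unknown"> {
  const started = performance.now();
  try {
    await fetch(`http://127.0.0.1:${port}/`, { mode: "no-cors" });
    return "open"; // something answered on that port
  } catch {
    // A fast network error usually means a refused (closed) port; a slow
    // one is ambiguous. The 500 ms cutoff is made up for this sketch.
    return performance.now() - started < 500 ? "closed" : "unknown";
  }
}
```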

Do you consider port scanning “malware”? 

ACK-J: I don’t necessarily consider port scanning malware; port scanning is commonplace and should be expected for any computer connected to the internet with a public IP address. On the other hand, devices on our home networks do not have public IP addresses and instead are protected from this scanning due to a technology called network address translation (NAT). Due to the nature of how browsers and websites work, the website code needs to be rendered on the user’s device (behind the protections put in place by NAT). This means websites are in a privileged position to communicate with devices on your home network (e.g. IoT devices, routers, TVs, etc.). There are certainly legitimate use cases for port scanning, even on internal networks, the most common being communicating with a program running on your PC, such as Discord. I prefer to be able to explicitly allow this type of behavior instead of leaving it wide open by default.

Is there a way to summarize how your extension addresses the privacy leak of port scanning?

ACK-J: Port Authority acts in a similar manner to a bouncer at a bar: whenever your computer tries to make a request, Port Authority will verify that the request is not trying to port scan your private network. If the request passes the check, it is allowed through and everything functions as normal. If it fails, the request is dropped. This all happens in a matter of milliseconds, but if a request is blocked you will get a notification.
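
Conceptually (a sketch of the general idea, not Port Authority’s actual source code), a WebExtension can play that bouncer role with a blocking webRequest listener that cancels requests flowing from public sites to private address ranges:

```ts
// Assumes an MV2 extension with "webRequest", "webRequestBlocking" and
// "<all_urls>" host permissions; the regex is a simplified approximation
// of the loopback/private address ranges.
const PRIVATE_HOST =
  /^(localhost|127\.|10\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)/;

browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    const target = new URL(details.url).hostname;
    const initiator = details.originUrl
      ? new URL(details.originUrl).hostname
      : "";
    // Cancel requests from public pages aimed at the private network.
    if (PRIVATE_HOST.test(target) && initiator && !PRIVATE_HOST.test(initiator)) {
      return { cancel: true };
    }
    return {};
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);
```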

Should Port Authority users expect occasional disruptions using websites that port scan, like eBay?

ACK-J: Nope, I’ve been using it for years along with many friends, family, and 1,000 other daily users. I’ve never received a single report that a website would not allow you to log in, check out, or use other expected functionality due to the extension blocking port scans. There are instances where you’d like your browser to communicate with an app on your PC, such as Discord; in this case you’ll receive an alert and can add Discord to an allow-list or simply click the “Blocking” toggle to disable blocking temporarily.

Do you see Port Authority growing in terms of a feature set, or do you feel it’s relatively feature complete and your focus is on maintenance/refinement?

ACK-J: I like extensions that serve a specific purpose so I don’t see it growing in features but I’d never say never. I’ve added an allow-list to explicitly permit certain domains to interact with services on your private network. I haven’t enabled this feature on the public extension yet but will soon.

Apart from Port Authority, do you have any plans to develop other extensions?

ACK-J: I actually do! I just finished writing up an extension called MailFail that checks the website you are on for misconfigurations in their email server that would allow someone to spoof emails using their domain. This will be posted soon!


Do you have an intriguing extension development story? Do tell! Maybe your story should appear on this blog. Contact us at amo-featured [at] mozilla [dot] org and let us know a bit about your extension development journey.


Firefox Developer Experience: Firefox DevTools Newsletter — 125

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 125 Nightly release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla, like Artem Manushenkov, who updated the Debugger Watch Expressions panel input field placeholder (#1619201).

Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

Pop it up!

Firefox 125 adds support for the Popover API, which is now supported across all major browsers 🎉. As said on the related MDN page:

The Popover API provides developers with a standard, consistent, flexible mechanism for displaying popover content on top of other page content. Popover content can be controlled either declaratively using HTML attributes, or via JavaScript.

In HTML, popover elements can be declared with a popover attribute. The popover can then be toggled from a button element which specifies a popovertarget attribute referencing the id of the popover element.


Firefox DevTools Inspector markup view, showing a button with a popovertarget attribute (with a “select element” button next to it) and a div element with a popover attribute. Inspector displayed on https://mdn.github.io/dom-examples/popover-api/blur-background/

In the Inspector markup view, an icon is displayed next to the popovertarget attribute so you can quickly jump to the popover element.
Popover elements can be toggled from JavaScript with HTMLElement.showPopover, HTMLElement.hidePopover and HTMLElement.togglePopover. beforetoggle and toggle events are fired when a popover element is toggled, and the Debugger provides those events in the Event Listeners Breakpoints panel.
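
A minimal, self-contained sketch of both the declarative and scripted sides of the API (the element IDs and text here are made up for illustration):

```ts
// Create a popover and a button that toggles it declaratively.
const popover = document.createElement("div");
popover.id = "demo-popover";
popover.popover = "auto"; // equivalent to the `popover` HTML attribute
popover.textContent = "Hello from a popover!";

const button = document.createElement("button");
button.setAttribute("popovertarget", "demo-popover");
button.textContent = "Toggle popover";

document.body.append(button, popover);

// The same element can also be driven from JavaScript...
popover.showPopover();
popover.hidePopover();
popover.togglePopover();

// ...and observed; these are the events the Debugger's Event Listeners
// Breakpoints panel exposes.
popover.addEventListener("beforetoggle", (e) => console.log("beforetoggle", e));
popover.addEventListener("toggle", (e) => console.log("toggle", e));
```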

Note that we don’t display ::backdrop pseudo-element rules yet, but we will soon (targeting Firefox 127, see #1893644).

Performance

As announced in the last newsletter, we’re focusing on performance for a few months to provide a fast and snappy experience to our beloved users. We’re happy to report that the Style Editor panel is now up to 20% faster to open (#1884072).

Chart of performance test duration over time, showing values going from ~750ms to ~600ms around March 14th.

We also improved the Debugger opening when a page contains a lot of Javascript sources (#1880809). In a specific case, we could spend around 9 whole seconds to process the different sources and populate the sources tree (see the 124 Firefox profile). In 125, it now only takes a bit more than 600 milliseconds, meaning it’s now 14 times faster (see the 125 Firefox profile).

Firefox Profiler Flame chart screenshot for the same function, on Firefox 124 and 125.

This also shows up on less extreme cases: our performance tests reported an average of 3% improvement on Debugger opening.

Debugger

There is now a button indicating if the opened file is an original file or a bundle, or if there was an issue when trying to retrieve the Source Map file (#1853899).

Firefox debugger with a tsx file opened. At the bottom of the file text, there a button saying "original file". A popup menu is opened and has the following items: - Enable Source Maps - Show and open original location by default - Jump to the related original source - Open the Source Map file in a new tab

Clicking on the button opens a menu dedicated to Source Map, where you can:

  • enable or disable Source Map
  • indicate if the Debugger should open original files by default
  • select the related original/bundle source
  • open the .map file in a new Firefox tab

We also fixed a glitch around text selection and line highlighting (#1878698), as well as an issue which was preventing the Outline panel from working properly (#1879322). Finally, we added back the preference that allows disabling the paused debugger overlay (#1865439). If you want to do so, go to about:config, search for devtools.debugger.features.overlay and toggle it to false.

Miscellaneous

  • CSP error messages in the Console now provide the effective directive (#1848315)
  • Infinity wasn’t visible in the Console auto-completion menu (#1698260)
  • Clicking on a relative URL of an image in the Inspector now honors the document’s base URL (#1871391)
  • An issue that could cause crashes of the Network Monitor is now fixed (#1884571)

Thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂

The Rust Programming Language Blog: Rust participates in OSPP 2024

Similar to our previous announcements of the Rust Project's participation in Google Summer of Code (GSoC), we are now announcing our participation in Open Source Promotion Plan (OSPP) 2024.

OSPP is a program organized in large part by the Institute of Software of the Chinese Academy of Sciences. Its goal is to encourage college students to participate in developing and maintaining open source software. The Rust Project is already registered and has a number of projects available for mentorship:

Eligibility is limited to students and there is a guide for potential participants. Student registration ends on the 3rd of June with the project application deadline a day later.

Unlike GSoC which allows students to propose their own projects, OSPP requires that students only apply for one of the registered projects. We do have an #ospp Zulip stream and potential contributors are encouraged to join and discuss details about the projects and connect with mentors.

After the project application window closes on June 4th, we will review and select participants, which will be announced on June 26th. From there, students will participate through to the end of September.

As with GSoC, this is our first year participating in this program. We are incredibly excited for this opportunity to further expand into new open source communities and we're hopeful for a productive and educational summer.

Support.Mozilla.Org: Make your support articles pop: Use the new Firefox Desktop Icon Gallery

Hello, SUMO community!

We’re thrilled to roll out a new tool designed specifically for our contributors: the Firefox Desktop Icon Gallery. This gallery is crafted for quick access and is a key part of our strategy to reduce cognitive load in our Knowledge Base content. By providing a range of inline icons that accurately depict interface elements of Firefox Desktop, this resource makes it easier for readers to follow along without overwhelming visual information.

We want your feedback! Join the conversation in our SUMO forum thread to ask questions or suggest new icons. Your feedback is crucial for improving this tool.

Thanks for helping us support the Firefox community. We can’t wait to see how you use these new icons to enrich our Knowledge Base!

Stay engaged and keep rocking the helpful web!

 

The Mozilla Blog: Julia Janssen creates art to be an ambassador for data protection

At Mozilla, we know we can’t create a better future alone. That is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Julia Janssen, an artist making our digital society’s challenges of data and AI tangible through the form of art. We talked with Julia about what sparked her passion for art, her experience in art school, traveling the world, generative AI and more.

I’m always curious to ask artists what initially sparked their interest, early in their lives or just in general, in the art that they do. How was that for you?

Janssen: Well, it’s actually quite an odd story. When I was 15 years old and in high school — at the age of 15, you have to choose the direction of your classes — I chose higher mathematics and art history, both as substitute classes. And I remember my counselor called me into his office and asked, “Why? Why would you do that? Difficult mathematics and art history have nothing to do with each other.” I just remember saying, “Well, I think both are fun,” but also, for me, art is a way to understand mathematics, and mathematics is a way to make art. I think at an early age, I noticed that both were interests of mine.

So I started graphic design at an art academy. I also did a lot of different projects around our relationship with technology, using a lot of mathematics to create patterns or art, always calculating things. And I never could quite grasp a theme, or the fact that technology was something I was so interested in, but at graduation in 2016, it all clicked together. It all fell into place. When I started reading a lot of books about data ownership rights and early media theories, I was like, “What the hell is happening? Why aren’t we all constantly so concerned about our data? These big corporations are monetizing our attention. We’re basically enslaved by this whole data industry. It’s insane what’s happening; why is everybody not worried about this?” So I made my first artwork during my graduation about this. And looking back at that time, I noticed that a lot of my work during art school was already about understanding this — for example, things like terms and conditions. I did a Facebook journal where I was kind of a news anchor in a newsroom, reading timelines out loud and all these things. So I think it was already present in my work, but in 2016, it all clicked together. And from there, things happened.

I found that I learned so much more by actually being in the field and actually doing internships, etc. during college instead of just sitting in the classroom all day. Did you kind of have a similar experience in art school? What’s that like?

Yeah, for sure. I do have some criticisms of how art is taught in schools. I’m not sure if it’s a worldwide thing, but what I experienced, for example, is that the outside world is a place of freedom — you can express yourself, you can do anything you want. But I also noticed that privacy and data protection were not typical topics of interest at the time, so most of my teachers didn’t encourage me to stand up for this, to research this, or at least not in the way that I was doing it. Or they’d say, “You can be critical of technology, but saying that privacy is a commodity, that’s not done.” So I felt like, yes, it is a great space, and I learned a lot, but it’s also sometimes a little bit limited to trending topics. For example, when I was graduating, everybody was working with gender identities and sustainability, those kinds of things. Everybody was focused on them, and this was kind of the main thing to do. So I feel like there should be more freedom in opening up the field for other interests.

It kind of made me notice that all these kids are worrying so much about failing a grade, for example. But later on, in the real world, it’s about surviving and earning your money and all the other things that can go wrong in the process. Go experiment and go crazy; the only thing that can happen is that you fail a class. It’s not so bad.

There’s a system in place that has been there forever, and so I imagine there’s got to feel draining sometimes and really difficult to break through for a lot of young artists. 

Yeah, for sure. And I don’t think my teachers were outdated — most of them were semi-young artists in the field as well, teaching for one day — but I think there is also this culture of “this is how we do it, and this is what we think is awesome.” What I tried to teach — I taught there for about six months as a substitute teacher — was to encourage people and let them know that I can give you advice and be your guide to help you out with whatever you want. And if I’m not particularly interested in what you want to say in a project, then I can still help you find a way to make it awesome: how to exhibit it, how to make it bigger or smaller, or how to place it in a room.

Julia Janssen at Mozilla’s Rise25 award ceremony in October 2023.

I’m curious to know for you as an artist, when you see a piece of art, or you see something, what is the first thing that you look for or what’s something that catches your attention that inspires you when you see a piece of art?

Honestly, it’s the way that it is exhibited. I’m always very curious about and interested in the ways that people display research or use media to express their work. When I’m walking through a museum, I sometimes get more enthusiastic about the mechanics behind the hanging arrangements than the work that is there to see, and I also use that to find inspiration to do things myself, which is what I find most thrilling. But also, I’m not an artist who just creates beautiful things, so to say; I need information, data, mathematics to even start creating something. And I can highly appreciate people who can just make very beautiful, emotional things that simply exist. I think that’s a completely different type of art.

You’ve also traveled a lot, which is one of the best ways for people to learn and gain new perspectives. How much has traveling the world inspired you in the work that you create?

I think a good example was last year when I went to Bali, which was actually kind of the weirdest experience of my life, because I hated the place. But it actually inspired a lot of things for my new project, Mapping the Oblivion, which I also talk about in my Rise25 video. Because what I felt in Bali was that this is just such a beautiful island, such beautiful culture and nature, and it’s completely occupied by tourists and tourist traps and shiny things to make people feel well. It’s kind of hard to explain what I felt like there, but for example, what I like to do on a holiday or on a trip is a lot of hiking, just going into nature to walk and explore and clear my mind, or not think about anything. In Bali, everything is designed to make you feel comfortable or at your service, and that just felt completely out of place for me. But I felt maybe this is something that I need to embrace. So looking on Google Maps, I got this recommendation to go to a three-floor swimming pool. It was awesome, but it was also kind of weird, because I was looking out over the jungle, and I felt like this place completely ruined this whole beautiful area. It clicked with what I was researching about platformification and frictionlessness, where these platforms or technology or social media timelines try to make you feel at ease and comfortable, and make decisions effortless. So on music apps, you click on “Jazzy Vibes” and it will keep you satisfied with some jazzy vibes. But you will not be able to really explore new things. It just goes as the algorithm goes. It gives you a feeling of being in control, but it’s actually a highly curated playlist based on their data. And what I felt was happening in Bali was that I wanted to do something else, but to make myself more comfortable and find the safer option, I chose that swimming pool instead of exploring. And I did it based on the recommendation of an app. And then I felt constantly in-between. I actually wanted to go to more local places, but I had questions — what would I find, is it safe to eat there? Or should I go to easier places where I’ll probably meet peers and people who I can have a conversation or a beer with? You go for the easier option, and then I felt I was drifting away from doing unexpected things and exploring what is there. I think that’s just a result of the technology being built around us, comforting us with news that we probably might be interested in seeing, playing some music, etc. But what makes us human in the end is to feel discomfort, to be in awkward positions, to explore something that is out there and unexpected and weird. I think that’s what makes the world we live in great.

Yeah, that’s such a great point. There are so many people that just don’t naturally and organically explore a city that they’re in. They’re always looking at recommendations and things like that. But a lot of times, if you just go and get lost in it, you can see a place for what it is, and it’s fun to do that.

It’s also kind of the whole fear of missing out on that one special place. But I feel like you’re missing out on so much by constantly looking at the screen and finding recommendations of what you should like or what you should feel happy about. I think this just kind of makes you blind to everything out there, and it also makes us very disconnected from our responsibility of making choices.

What do you think is the biggest challenge that we face this year online, or as a society in the world, and how do we combat that?

I mean, for me, I think most of the debate about generative AI is about it taking our jobs, or taking the jobs of creatives or writers or lawyers. I think the more fundamental question that we have to ask ourselves with generative AI is how we will live with it and still be human. Do we just allow whatever it can do, or do we also take some measures regarding what is desirable for it to replace? Because I feel like if we’re just outsourcing all of our choices to this machine, then what the hell are we doing here? We need to find a proper relationship. I’m really not against technology, not at all; I also see all the benefits and I love everything that it is creating — although not everything. But I think it’s really about creating this healthy relationship with the applications and the technology around us, which means that sometimes you have to choose friction and do it yourself and be a person. For example, if you are now graduating from university, I think it will be a challenge for students to actively choose to write their own thesis and not just generate it with ChatGPT by setting some clever parameters. I think small challenges like that are something we are currently all facing, and fixing them is something we have to want.

What do you think is a simple action everyone can take to make the world online and offline a little better? 

Here in Europe, we have the GDPR. And it says that we have the right to data access, which means that you can ask a company to show the data that they have collected about you, and they have to show you within 30 days. I also do a lot of teaching in workshops, at schools or at universities, showing how to request your own data. You get to know yourself in a different way, which is funny. I did a lot of projects around this, turning it into art installations. It’s a very simple act to perform, but it’s interesting, because this is only the raw data — so you still don’t know how they use it in profiling algorithms, but it gives you clarity on some advertisements that you see that you don’t understand, or an understanding of the scale of what they are collecting about you in every moment. So that is something that I highly encourage. Another thing in line with that is “the right to be forgotten,” which is also a European right.

We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope people are celebrating in the next 25 years?

That’s a nice question. I hope that we will be celebrating an internet that is not governed by big tech corporations, but based on public values and ethics. One of the smaller steps toward that: currently in Europe, one of the foundations of processing your data is having informed consent. Informed consent is such a beautiful term, originating from the medical field, where a doctor gives you information about the procedure and the possible risks and everything. But on the internet, that is just applied in this small button like, “Hey, click here,” and you give up all your rights and continue browsing without questioning. And I think one step is to get a real, proper, fair way of getting consent, or maybe even switching it around, so that the infrastructure of data collection is not the default, but instead it’s “do not collect anything without consent.”

We’re currently in a transition phase where there are a lot of very important alternatives that help us avoid big tech applications. Think about how Firefox is already doing so much better than those big tech alternatives. But I think, at the core, all our basic default apps should not be driven by commercial, very toxic incentives to just monetize your data, and that has to do with how we design this infrastructure, the policies and the legislation, but also the technology itself and the protocol layers of how it works. This is not something that we can change overnight. I hope that we’re not only thinking in alternatives that avoid these toxic applications and big corporations, but that not harming your data, your equality, your fairness, your rights becomes the default.

In our physical world, we value things like democracy and equality and autonomy and freedom of choice. But on the internet, that is just not present yet, and I think it should be at the core, at the foundation, of building a digital world, just as it is in our physical world.

What gives you hope about the future of our world?

Things like Rise25, to be honest. I think it was so special. I spoke about this with other winners as well: we’re all just so passionate about what we’re doing. Of course, we have inspiration from people around us, and other people doing work on this, but there still aren’t so many of us, and being united in this way just gives a lot of hope.


The post Julia Janssen creates art to be an ambassador for data protection appeared first on The Mozilla Blog.

The Rust Programming Language BlogAnnouncing Rustup 1.27.1

The Rustup team is happy to announce the release of Rustup version 1.27.1. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rustup installed, getting Rustup 1.27.1 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:

$ rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

$ rustup update

If you don't have it already, you can get Rustup from the appropriate page on our website.

What's new in Rustup 1.27.1

This new Rustup release involves some minor bug fixes.

The headlines for this release are:

  1. Prebuilt Rustup binaries should be working on older macOS versions again.
  2. rustup-init will no longer fail when fish is installed but ~/.config/fish/conf.d hasn't been created.
  3. Regressions regarding symlinked RUSTUP_HOME/(toolchains|downloads|tmp) have been addressed.

Full details are available in the changelog!

Rustup's documentation is also available in the Rustup Book.

Thanks

Thanks again to all the contributors who made Rustup 1.27.1 possible!

  • Anas (0x61nas)
  • cuiyourong (cuiyourong)
  • Dirkjan Ochtman (djc)
  • Eric Huss (ehuss)
  • eth3lbert (eth3lbert)
  • hev (heiher)
  • klensy (klensy)
  • Chih Wang (ongchi)
  • Adam (pie-flavor)
  • rami3l (rami3l)
  • Robert (rben01)
  • Robert Collins (rbtcollins)
  • Sun Bin (shandongbinzhou)
  • Samuel Moelius (smoelius)
  • vpochapuis (vpochapuis)
  • Renovate Bot (renovate)

The Rust Programming Language BlogAutomatic checking of cfgs at compile-time

The Cargo and Compiler team are delighted to announce that starting with Rust 1.80 (or nightly-2024-05-05) every reachable #[cfg] will be automatically checked to ensure that it matches the expected config names and values.

This can help with verifying that the crate is correctly handling conditional compilation for different target platforms or features. It ensures that the cfg settings are consistent between what is intended and what is used, helping to catch potential bugs or errors early in the development process.

This addresses a common pitfall for new and advanced users.

This is another step in our commitment to provide user-focused tooling and we are eager and excited to finally see it fixed, after more than two years since the original RFC 3013 [1].

A look at the feature

Every time a Cargo feature is declared, that feature is transformed into a config that is passed to rustc (the Rust compiler), so that it can verify, along with the well-known cfgs, whether any of the #[cfg], #[cfg_attr] and cfg! conditions have unexpected configs, and report a warning with the unexpected_cfgs lint.

Cargo.toml:

[package]
name = "foo"

[features]
lasers = []
zapping = []

src/lib.rs:

#[cfg(feature = "lasers")]  // This condition is expected
                            // as "lasers" is an expected value
                            // of the `feature` cfg
fn shoot_lasers() {}

#[cfg(feature = "monkeys")] // This condition is UNEXPECTED
                            // as "monkeys" is NOT an expected
                            // value of the `feature` cfg
fn write_shakespeare() {}

#[cfg(windosw)]             // This condition is UNEXPECTED
                            // it's supposed to be `windows`
fn win() {}

cargo check:

[Screenshot of the cargo check output, showing unexpected_cfgs warnings for the two unexpected conditions above]

Custom cfgs and build scripts

From Cargo's point of view, a custom cfg is one that is neither defined by rustc nor by a Cargo feature. Think of tokio_unstable, has_foo, ... but not feature = "lasers", unix or debug_assertions.

Some crates use custom cfgs that they either expect from the environment (RUSTFLAGS or other means) or that are enabled by some logic in the crate's build.rs. For those crates Cargo provides a new instruction: cargo::rustc-check-cfg [2] (or cargo:rustc-check-cfg for older Cargo versions).

The syntax to use is described in the rustc book section checking configuration, but in a nutshell the basic syntax of --check-cfg is:

cfg(name, values("value1", "value2", ..., "valueN"))

Note that every custom cfg must always be expected, regardless of whether the cfg is active or not!

build.rs example

build.rs:

fn main() {
    println!("cargo::rustc-check-cfg=cfg(has_foo)");
    //        ^^^^^^^^^^^^^^^^^^^^^^ new with Cargo 1.80
    if has_foo() {
        println!("cargo::rustc-cfg=has_foo");
    }
}

Each cargo::rustc-cfg should have an accompanying unconditional cargo::rustc-check-cfg directive to avoid warnings like this: unexpected cfg condition name: has_foo.

Equivalence table

cargo::rustc-cfg     | cargo::rustc-check-cfg
---------------------|------------------------------------------------
foo                  | cfg(foo) or cfg(foo, values(none()))
foo=""               | cfg(foo, values(""))
foo="bar"            | cfg(foo, values("bar"))
foo="1" and foo="2"  | cfg(foo, values("1", "2"))
foo="1" and bar="2"  | cfg(foo, values("1")) and cfg(bar, values("2"))
foo and foo="bar"    | cfg(foo, values(none(), "bar"))
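
For instance, here is a minimal build.rs sketch of the last row above (the cfg name foo is just a placeholder, not a real cfg): a single check-cfg directive with values(none(), "bar") covers both the name-only form and the valued form of the same cfg.

fn main() {
    // Expect both `foo` (name only, via none()) and `foo="bar"`.
    println!("cargo::rustc-check-cfg=cfg(foo, values(none(), \"bar\"))");

    // Activate both forms of the cfg.
    println!("cargo::rustc-cfg=foo");
    println!("cargo::rustc-cfg=foo=\"bar\"");
}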

More details can be found in the rustc book.

Frequently asked questions

Can it be disabled?

For Cargo users, the feature is always on and cannot be disabled, but like any other lint it can be controlled: for example, #![allow(unexpected_cfgs)] silences it for a crate.

Does the lint affect dependencies?

No, like most lints, unexpected_cfgs will only be reported for local packages thanks to cap-lints.

How does it interact with the RUSTFLAGS env?

You should be able to use the RUSTFLAGS environment variable as before. Currently --cfg arguments are not checked; only usages in code are.

This means that doing RUSTFLAGS="--cfg tokio_unstable" cargo check will not report any warnings, unless tokio_unstable is used within your local crates, in which case the crate author will need to make sure that custom cfg is expected with cargo::rustc-check-cfg in the build.rs of that crate.

How to expect custom cfgs without a build.rs?

There is currently no way to expect a custom cfg other than with cargo::rustc-check-cfg in a build.rs.

Crate authors that don't want to use a build.rs are encouraged to use Cargo features instead.
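
As a minimal sketch of that advice (the feature name foo-support below is hypothetical): a Cargo feature expresses the same conditional compilation as a custom cfg, and Cargo expects it automatically, with no build.rs required.

Cargo.toml:

[features]
foo-support = []

src/lib.rs:

#[cfg(feature = "foo-support")] // expected automatically, since Cargo
fn foo() {}                     // declares every feature to rustc

Users would then enable the code path with cargo build --features foo-support instead of setting a custom cfg through RUSTFLAGS.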

How does it interact with other build systems?

Non-Cargo based build systems are not affected by the lint by default. Build system authors that wish to have the same functionality should look at the rustc documentation for the --check-cfg flag for a detailed explanation of how to achieve the same functionality.

  1. The stabilized implementation and RFC 3013 diverge significantly, in particular there is only one form for --check-cfg: cfg() (instead of values() and names(), which were incomplete and subtly incompatible with each other).

  2. cargo::rustc-check-cfg will start working in Rust 1.80 (or nightly-2024-05-05). From Rust 1.77 to Rust 1.79 (inclusive) it is silently ignored. In Rust 1.76 and below a warning is emitted when used without the unstable Cargo flag -Zcheck-cfg.

Don Martian easy experiment to support behavioral advertising

This is a follow-up to a previous post on how a majority of US residents surveyed are now using an ad blocker, and how the survey found that privacy concerns are now the number one reason to block ads.

Almost as long as Internet privacy tools have been a thing, so have articles from personalized ad proponents telling us not to use them, because personalized ads are good actually. The policy debate over personalized (or surveillance, or cross-context behavioral, or tracking-based, or whatever you want to call it) advertising seems to keep repeating an endless argument: on the one hand, personalized advertising causes some risk or cost (I’m not going to summarize the risks or costs here; go read Bob Hoffman’s books or Microtargeting as Information Warfare for more info), but on the other hand we have to somehow balance that against the benefits of personalized advertising.

Benefits? Let’s see them. The claim that cross-context behavioral advertising is good for consumers should be straightforward to test. If ad personalization really helps match buyers and sellers in a market, then users of privacy tools and privacy settings must be buying worse products and services. Research should show that the more privacy options you pick, the less happy you are with your stuff. And the more personalized your ad experience is, the more satisfied a customer you are. This is different from asking whether or not people prefer to have ad personalization turned on. That has been pretty extensively covered, and the answer is that some people do, and some people don’t. This question isn’t about whether people like personalized ads or not, it’s about whether people who get more personalized ads are happier with how they spend their money.

This should be a fairly low-cost project because in general, the companies that do the most personalized advertising are in the best position to do the research to support it. Are users of privacy tools and settings more or less satisfied with the products and services they buy than people who leave the personalized ad options on?

  • Do privacy-protected users give lower ratings to the products they buy?

  • Do privacy-protected users return or stop using more of their purchases?

  • Are privacy-protected users more likely to buy a replacement, competing product after an unsuccessful first purchase in a category?

  • Are privacy-protected users more likely to agree with general statements about a decline in quality and trustworthiness in business in general?

The correlation between more privacy and less satisfied consumers would be detectable from a variety of angles. Vendors of browsers with preferences that affect ad targeting should be able to show that people who turn on the privacy settings are somehow worse off than people who don’t. Anti-adblock companies do research on ad blocker users—so how are shopping experiences different for those users? Any product that connects to a server for updates or telemetry is providing data on how long the buyer chooses to keep using it. And—the biggest opportunity here—any company that has an Apple iOS app (and that’s a lot of companies) should be able to compare satisfaction metrics between customers with App Tracking Transparency (ATT) on or off.

Ad platforms, search engines, social network companies, and online retailers all have access to the needed info on ads, privacy settings, locations, and purchases. Best of all, they’re constantly running customer surveys and experiments of all kinds. It would be straightforward for any of these companies to run yet another user satisfaction survey, to prove what should be an obvious, measurable effect. I’m really looking for any kind of research here, whether it’s a credit card company running a SQL query on existing data to point out that customers with iOS app tracking turned off have more chargebacks, or a longer-term customer satisfaction study, anything.

looking at the data we do have

Here’s the closest research to this subject that I could find: The Welfare Effects of Ad Blocking by Lin et al.

[P]articipants that were asked to install an ad-blocker become less likely to regret recent purchases, while participants that were asked to uninstall their ad-blocker report lower levels of satisfaction with their recent purchases.

The ad blockers used in that study, however, were multi-purpose ones such as uBlock Origin that block ads in general, not just personalization. And the users in the study would have been exposed to personalized ads on social media where an ad blocker doesn’t work, which according to the FTC is more likely to include deceptive ads. So the impact of privacy tools and settings is still hard to figure out. A full-service blocker like uBlock Origin will block some but not all cross-context tracking, so even though users will still see ads on Facebook, those ads will be less well matched to them. The problem, though, is that Facebook is both the most accurately personalized ad medium and loaded with outright scams, and the effect of privacy settings on scams goes two ways: you can avoid being specifically targeted for a scam, but more likely you can also just get more scam ads by default if you feed in too little info to be targeted for the good ads. (But I’m pretty sure it would be too much to ask to get Facebook to turn off the scam ads for a few weeks, for Science.)

Another related paper is Behavioral advertising and consumer welfare: An empirical investigation.

The presence of low quality vendors, along with the recent increase in the use of ad blockers, makes it increasingly difficult for new, high quality vendors, to reach new clients. Consumers benefit from having access to new sellers that are able to meet their needs through behavioral ads, as long as they are good sellers.

but

targeted ads are more likely to be associated with lower quality vendors, and higher prices for identical products, compared to competing alternatives found in organic search results

There seems to be a gap between theory and practice here. On the one hand, the Econ 101 version of an honest seller should be able to use ad personalization to match their product to a customer who would benefit from it. On the other hand, dishonest sellers seem to be winning the personalized ad game in the real world.

If you look back on the history of advertising, there has never been an ad medium that required so much legal and technical complexity to try to get people to accept it. Why is Meta going to so much trouble to try to come up with a legal way to require people in the EU to accept personalized ads? If ad personalization is so good for consumers, won’t they pick it on their own? Anyway, I’m looking for research on how personalization and privacy choices affect customer satisfaction.

Related

free riding on future web ads?

Reputation, signaling, and targeted ads

B L O C K in the U S A

banning surveillance advertising

privacy economics sources

When can deceptive sellers outbid honest sellers for ad impressions?

Adrian GaudebertThe challenges of teaching a complex game

When I was 13, my mom bought me Civilization III from a retail shop, then went on to do some more shopping. I stayed in the car, with this elegant box in my hands, craving to play the game it contained. I opened the box, and there discovered something magical: the Civilization III Manual. Having nothing better to do, I started reading it…

The Civilization III manual. That game manual was THICK.

More than 20 years later, I still remember how great reading that book felt. I was propelled into the game, learning about its systems and strategies, discovering screens of foggy maps and world wonders. It made me love the game before I had even played it! Since then I've played all the Civilization games that came out — including Humankind, the unofficial 7th episode — and loved all of them. Would I have had the same connection to these games had I not read the manual? Impossible to tell. Would I have read that book had I not been trapped in a car with the game box on my lap? Definitely not! Even the developers of the game knew that nobody was reading those texts:

A quote from the Civilization III manual: “The authors and developers of computer games know too well that most players never read the manual.”

Here's me now, 20-something years later, having made a game of my own and needing to teach it to potential players… Should I write a full-blown game manual, hoping that a little 13-year-old will read it in a parking lot?

Heck no! Ain't nobody got time for that!

Let's make a tutorial instead

Dawnmaker has been built almost like a board game, in the sense that it has complex rules that you have to learn before you can play. Physical board game players are used to that: someone has to go through the rules before they can explain them to the rest of their player group. But video games are a different beast, and we've long moved away from reading… well, almost anything at all, really, and certainly away from reading rules. You can't put each player into a car in a parking lot with nothing else to do other than read the rules of your game. If you were to present the video game player with a rules book, in today's world of abundance, they would just move on to the next game in their unending backlog.

Teaching a game is thus incredibly difficult: it has to have as little text as possible, it has to be fun and rewarding, and it has to hook the player so that, by the end of the teaching phase, they still want to play the actual game.

It's with all those things in mind that I started building Dawnmaker's tutorial. I set two main rules in place: first, use as few words as possible, and second, make the player learn while doing. The first iteration of the tutorial was very terse: you only had a small goal written at the top of the screen, and almost no explanation whatsoever about what you were to do, or why. It turns out that didn't work too well. Players were lost, especially when it came to the most complex actions or features of the game. Past a certain point in the tutorial, almost all of the players stopped reading the objectives at the top of the screen. And finally, they were also lacking a sense of purpose.

So for all my good intentions, I had to revise my approach and write more words. The second iteration, which is now live in the game and demo, has a lot of small tooltips that pop up around the screen as the interface shows itself. I've tried to load information as slowly as possible, giving the player only what they need at a given moment. I think I approximately quadrupled the number of words in the tutorial, but such is the reality of teaching a complex game.

The other big change I made was to give the player a better sense of progression in the tutorial. The objectives now stay visible in a box on the left-hand side of the screen. They have little animations and sounds that reward the player when they complete a task. Seeing that list grow shows how the player has progressed and is also rewarding by itself.

Teaching the game doesn't only happen in the tutorial though, but also through the various signs and feedback we put around the game. Here's an example: during the tutorial, new players did not understand what was happening with the new building choice that was presented. The solution to this was not to explain with words what those buildings were, but to show feedback. Now, whenever you gain a new building, you see that same building popping up in the center of the board, then moving towards the buildings roster. It's a double win: they understand that the building goes somewhere, they see where, and they are inclined to check that place and see what it is. I guess one feedback is worth a thousand words?

This version of the tutorial is still far from perfect. But it is the first thing players interact with, and thus it is a piece of the game that really has to shine. We'll keep collecting feedback from new players, and use that to polish the tutorial until, like Eclairium, it shines bright.

BTW: unlike Eclairium, diamonds do not shine, they simply reflect light. Rihanna has been lying to us all.

Next event: Geektouch in Lyon

If you're in Lyon or close to it, come and meet us at the Geektouch / Japan Touch festival in Eurexpo on May 4th and 5th! We'll have a stand on the Indie Game Lab space (lot A87). You will of course get to play with the latest version of Dawnmaker. We hope to see you there!


This piece was initially sent out to the readers of our newsletter. Wanna join in on the fun? Head out to Dawnmaker's presentation page and fill the form. You'll receive regular stories about how we're making this game, the latest news of its development, as well as an exclusive access to Dawnmaker's alpha version!

Join our community!

The Mozilla BlogAbbie Richards on the wild world of conspiracy theories and battling misinformation on the internet

At Mozilla, we know we can’t create a better future alone, that is why each year we will be highlighting the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Abbie Richards, a former stand-up comedian turned content creator dominating TikTok as a researcher, focusing on understanding how misinformation, conspiracy theories and extremism spread on the platform. She also is a co-founder of EcoTok, an environmental TikTok collective specializing in social media climate solutions. We talked with Abbie about finding emotional connections with audiences, the responsibility of social media platforms and more.

First off, what’s the wildest conspiracy theory that you have seen online?

It’s hard to pick the wildest because I don’t know how to even begin to measure that. One that I think about a lot, though, is that I tend to really find the spirituality ones very interesting. There’s the new Earth one with people who think that the earth is going to be ascending into a higher dimension. And the way that that links to climate change — like when heat waves happen, and when the temperature is hotter than normal, and they’re like “it’s because the sun’s frequency is increasing because we’re going to ascend into a higher dimension.” And I am kind of obsessed with that line of thought. Also because they think that if you, your soul, vibrate at a high enough frequency — essentially, if your vibes are good enough — you will ascend, and if not, you will stay trapped here in dystopian earth post ascension which is wild because then you’re assigning some random, universal, numerical system for how good you are based on your vibrational frequency. Where is the cut off? At what point of vibrating am I officially good enough to ascend, or am I going to always vibrate too low? Are my vibes not good? And do I not bring good vibes to go to your paradise? I think about that one a lot.

As someone who has dug through tons of misinformation and conspiracy theories, what do you think are the most common things people should notice when they’re trying to identify whether something is fake?

So I have two answers to this. The first is that the biggest thing people should know when they’re encountering misinformation and conspiracy theories online is that they need to check in with how a certain piece of information makes them feel. And if it’s a piece of information that they really, really want to believe, they should be especially skeptical, because that’s the number one thing. Not whether they can recognize that AI-generated human ears are janky. It’s the fact that they want to believe what the AI-generated deepfake is saying, and no matter how many tricks we can tell them about symmetry and about looking for clues that it is a deepfake, fundamentally, if they want to believe it, the thing will still stick in their brain. And they need to learn more about the emotional side of encountering this misinformation and conspiracy theories online. I would prioritize that over the tiny little tricks and tips for how to spot it, because really, it’s an emotional problem. When people lean into and dive into conspiracy theories, and they fall down a rabbit hole, it’s not because they’re not media literate enough. Fundamentally, it’s because it’s emotionally serving something for them. It’s meeting some sort of emotional, psychological, epistemic need to feel like they have control, to feel like they have certainty, to feel like they understand things that other people don’t and they’re in on knowledge, to feel like they have a sense of community, right? Conspiracy theories create senses of community and make people feel like they’re part of a group. There are so many things that it’s providing that no amount of tips and tricks for spotting deepfakes will ever address. And we need to be addressing those. How can we help them feel in control? How can we help them feel empowered so that they don’t fall into this?

The second to me is wanting to make sure that we’re putting the onus on the platforms rather than the people to decipher what is real and not real because people are going to consistently be bad at that, myself included. We all are quite bad at determining what’s real. I mean, we’re encountering more information in a day than our brains can even remotely keep up with. It’s really hard for us to decipher which things are true and not true. Our brains aren’t built for that. And while media literacy is great, there’s a much deeper emotional literacy that needs to come along with it, and also a shifting of that onus from the consumer onto the platforms.

Abbie Richards at Mozilla’s Rise25 award ceremony in October 2023.

What are some of the ways these platforms could take more responsibility and combat misinformation on their platforms?

It’s hard. I’m not working within the platforms, so it’s hard to know what sort of infrastructure they have versus what they could have. It’s easy to look at what they’re doing and say that it’s not enough, but I don’t know about their systems. It’s hard to make specific recommendations like “here’s what you should be doing to set up a more effective …”. What I can say is that, without a doubt, these mega corporations that are worth billions of dollars certainly have the resources to be investing in better moderation and figuring out ways to experiment: trying different things to see what works and encouraging healthier content on the platform. Fundamentally, that’s the big shift. I can yell about content moderation all day, and I will, but the incentives on the platforms are not to create high quality, accurate information. The incentives on all of these platforms are entirely driven by profit, and how long they can keep you watching, and how many ads they can push to you, which means that the content that will thrive is the stuff that is the most engaging, which tends to be less accurate. It tends to cater to your negative emotions, to things like outrage, and that sort of low quality, easy to produce, inaccurate, highly emotive content is what is set up to thrive on the platform. This is not a system that is functional with a couple of flaws; this misinformation crisis that we’re in is very much the result of the system functioning exactly as it’s intended.

What do you think is the biggest challenge we face in the world this year on and offline? 

It is going to be the biggest election year in history. We just have so many elections all around the world, and platforms that we know don’t serve healthy, functional democracy super well, and I am concerned about that combination of things this year.

What do you think is one action that everybody can take to make the world, and our online lives, a little bit better?

I mean, log off (laughs). Sometimes log off. Go sit in silence just for a bit. Don’t say anything, don’t hear anything. Just go sit in silence. I swear to God it’ll change your life. I think we are in a state right now where we are chronically consuming so much information, like we are addicted to information and just drinking it up, and I am begging people to take at least an hour a week to not consume anything, and just see how that feels. If we could all just step back for a little bit and log off and rebel a little bit against having our minds commodified for these platforms to just sell ads, I really feel like that is one of the easiest things that people can do to take care of themselves.

The other thing would be to check in with your emotions. I can’t stress this enough. When you encounter information, how does that information make you feel? How much do you want to believe that information? So very much, my advice is to slow down and feel your feelings.

We started Rise25 to celebrate Mozilla’s 25th anniversary, what do you hope people are celebrating in the next 25 years?

I hope that we’ve created a nice socialist internet utopia where we have platforms that people can go interact and build community and create culture and share information and share stories in a way that isn’t driven entirely by what’s the most profitable. I’d like to be celebrating something where we’ve created the opposite of a clickbait economy where everybody takes breaks. I hope that that’s where we are at in 25 years.

What gives you hope about the future of our world?

I interact with so many brilliant people who care so much and are doing such cool work because they care, and they want to make the world better, and that gives me a lot of hope in general. I think that approaching all of these issues from an emotional lens and understanding that people in general just want to feel safe and secure, and they just want to feel like they know what’s coming around the corner for them, and they can have their peaceful lives, is a much more hopeful way to think about pretty scary political divides. I think that there is genuinely a lot more that we have in common than there are differences. It’s just that right now, those differences feel very loud. There are so many great people doing such good work with so many different perspectives, and combined, we are so smart together. On top of that, people just want to feel safe and secure. And if we can figure out a way to help people feel safe and secure and help them feel like their needs are being met, we could create a much healthier society collectively.


The post Abbie Richards on the wild world of conspiracy theories and battling misinformation on the internet appeared first on The Mozilla Blog.

Wil ClouserI made a new hack poster

I was feeling nostalgic a couple months ago and built a hack poster out of plywood. It’s mostly modeled after the original but I added the radio tower and changed the words. “This technology could fall into the right hands” still makes me smile when I see it out in the world.

[Photos: the poster hanging on the wall, a close-up of the radio tower, and a close-up of the lettering]

Mozilla Addons Blog1000+ Firefox for Android extensions now available

The new open ecosystem of extensions on Firefox for Android launched in December with just over 400 extensions. Less than five months later we’ve surpassed 1,000 Firefox for Android extensions. That’s an impressive achievement by this developer community! It’s exciting to see so many developers embrace the opportunity to explore new creative possibilities for mobile browser customization.

If you’re a developer intrigued to learn more about building extensions on Firefox for Android, here’s a great place to get started. Or maybe you already have some feedback about missing APIs on Firefox for Android?

What are some of your favorite new Firefox for Android extensions? Drop some props in the comments below.

The post 1000+ Firefox for Android extensions now available appeared first on Mozilla Add-ons Community Blog.

Mozilla Localization (L10N)L10n report: May 2024 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

New content and projects

What’s new or coming up in Firefox desktop

To start, a “logistic” announcement: on April 29 we changed the configuration of the Firefox project in Pontoon to use a different repository for source (English) strings. This is part of a larger change that will move Firefox development from Mercurial to Git.

While the change was mostly transparent for localizers, there is an added benefit: as part of the Firefox project, you will now be able to localize about 40 strings that are used by GeckoView, the core of our Android browsers (Firefox, Focus). For your convenience, these are grouped in a specific tag called GeckoView. Since these are mostly old strings dating back to Fennec (Firefox for Android up to version 68), you will also find that existing translations have been imported — in fact, we imported over 4 thousand translations.

Going back to Firefox desktop, version 127 is currently in Nightly, and will move to Beta on May 13. Over the past few weeks there have been a few new features and updates that are worth testing to ensure the best experience for users.

You are probably aware of the Firefox Translations feature available for a growing number of languages. While this feature was originally available for full-page translation, now it’s also possible to select text in the page and translate it through the context menu.

Screenshot of the translation selection feature in Firefox.

Reader Mode is also in the process of getting a redesign, with more controls to customize the user experience.

Screenshot of the Reader Mode settings in Firefox Nightly.

The New Tab page has a new wallpaper function: in order to test it, go to about:config (see this page if you’re unfamiliar), search for browser.newtabpage.activity-stream.newtabWallpapers.enabled and flip its value to true (double-click will work). At this point, open a new tab and click the gear icon in the top-right corner. Note that the available wallpapers change depending on the current theme (dark vs light).

Screenshot of New Tab wallpaper selection in Nightly.

Last but not least, make sure to test the new features available in the integrated PDF Reader, in particular the dialog to add images and highlight elements in the page.

Screenshot of the PDF Viewer in Firefox, with the “Add image” UI.

What’s new or coming up in mobile

The mobile team is currently redesigning the app menus in Firefox Android and iOS. There will be many new menu strings landing in the upcoming versions (you may have already noticed some prelanding), including some dynamic menu text that may get truncated for some locales – especially on smaller screens.

Testing for this type of localization issue will be a focus: we’ll set expectations for it soon and send testing instructions (the v130 or v131 releases are currently the target). Strings will make their way incrementally into the new menus available through Firefox Nightly, allowing enough time for localizers to translate and test continuously.

What’s new or coming up in web projects

Mozilla.org

The mozilla.org team is creating a regular cleanup routine by labeling soon-to-be-replaced strings with an expiration date, usually two months after the string has become obsolete. This approach will minimize the time communities spend localizing strings that are no longer used. In other words, if you see a string labeled with a date, please skip it. Below is an example; in this case, you want to localize the v2 string:

example-v2 = Security, reliability and speed — on every device, anywhere you go.

# Obsolete string (expires: 2024-03-18)
example = Security, reliability and speed — from a name you can trust.

Relay Website

This product is in maintenance mode and it will not be open for new locales until we remove obsolete strings and revert the content migration to mozilla.org (see also l10n report from November 2023).

What’s new or coming up in SUMO

  • Konstantina is joining the SUMO force! She moved from the Marketing team to the Customer Experience team in late Q1. If you haven’t gotten to know her yet, please don’t hesitate to say hi!
  • AI spam has been a big issue in our forum lately, so we decided to spin up a new contributor policy around the use of AI-generated tools. Please check this thread if you haven’t!
  • We opened an AAQ for NL in our support forum. Thanks to Tim Maks and the rest of the NL community, who’ve been very supportive of this work.
  • Are you contributing to our Knowledge Base? You may want to read the recent blog posts from the content team to get to know more about what they’re up to. In short, they’re doing a lot around freshening up our knowledge base articles.
  • Wanna know more about what we’ve done in Q1 2024? Read the recap here.

What’s new or coming up in Pontoon

Large Language Model (LLM) Integration

We’re thrilled to announce the integration of LLM-assisted translations into Pontoon! For all locales utilizing Google Translate as a translation source, a new AI-powered option is now available within the ‘Machinery’ tab. This feature enhances Google Translate outputs by leveraging a Large Language Model (LLM). Users can now tailor translations to be more formal or informal and rephrase text for clarity and tone.

Since January, our team has conducted extensive research to explore how other localization services are utilizing AI. We specifically focused on comparing the capabilities of Large Language Models (LLMs) against traditional machine translation methods and identifying industry best practices.

Our findings revealed that while tools like Google Translate provide a solid foundation, they sometimes fall short, often translating text too literally. Recognizing the potential for improvement, we introduced functionality within Pontoon to adjust the tone and refine phrases directly.

For example, consider the phrase “Firefox has your back” translated in the Italian locale. The suggestion provided by Google’s machine translation is literal and incorrect (“Firefox covers your shoulders”). The images below demonstrate the use of the “Rephrase” option:

Screenshot of the LLM feature in Pontoon, showing the dropdown to select a command.

Screenshot of the enhanced translation output from the LLM rephrasing the initial Google Translate result.

Furthering our community engagement, on April 29th, we hosted a Localization Fireside Chat. During this session, we discussed the new feature in depth and provided a live demonstration. Catch the highlights of our discussion at the following recordings (the LLM feature is discussed at the 7:22 mark):

Performance improvements

At the end of last year we asked Mozilla localizers what areas of Pontoon they would like to see improved. Performance optimizations were one of the top-voted requests and we’re happy to report we’ve landed several speedups since the beginning of the year.

The most notable improvements were made to the dashboards, with the Contributors, Insights and Tags pages now loading in a fraction of the time they took earlier in the year. We’ve also improved the loading times of the Permissions tab, the Notifications page and some filters.

As shown in the chart below, almost all the pages and actions will now take less time to load.

Chart showing the improved apdex score of several views in Pontoon.

Events

Watch our latest localization virtual events here.

Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

The Rust Programming Language BlogAnnouncing Rust 1.78.0

The Rust team is happy to announce a new version of Rust, 1.78.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.78.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.78.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.78.0 stable

Diagnostic attributes

Rust now supports a #[diagnostic] attribute namespace to influence compiler error messages. These are treated as hints which the compiler is not required to use, and it is also not an error to provide a diagnostic that the compiler doesn't recognize. This flexibility allows source code to provide diagnostics even when they're not supported by all compilers, whether those are different versions or entirely different implementations.

With this namespace comes the first supported attribute, #[diagnostic::on_unimplemented], which can be placed on a trait to customize the message when that trait is required but hasn't been implemented on a type. Consider the example given in the stabilization pull request:

#[diagnostic::on_unimplemented(
    message = "My Message for `ImportantTrait<{A}>` is not implemented for `{Self}`",
    label = "My Label",
    note = "Note 1",
    note = "Note 2"
)]
trait ImportantTrait<A> {}

fn use_my_trait(_: impl ImportantTrait<i32>) {}

fn main() {
    use_my_trait(String::new());
}

Previously, the compiler would give a builtin error like this:

error[E0277]: the trait bound `String: ImportantTrait<i32>` is not satisfied
  --> src/main.rs:12:18
   |
12 |     use_my_trait(String::new());
   |     ------------ ^^^^^^^^^^^^^ the trait `ImportantTrait<i32>` is not implemented for `String`
   |     |
   |     required by a bound introduced by this call
   |

With #[diagnostic::on_unimplemented], its custom message fills the primary error line, and its custom label is placed on the source output. The original label is still written as help output, and any custom notes are written as well. (These exact details are subject to change.)

error[E0277]: My Message for `ImportantTrait<i32>` is not implemented for `String`
  --> src/main.rs:12:18
   |
12 |     use_my_trait(String::new());
   |     ------------ ^^^^^^^^^^^^^ My Label
   |     |
   |     required by a bound introduced by this call
   |
   = help: the trait `ImportantTrait<i32>` is not implemented for `String`
   = note: Note 1
   = note: Note 2

For trait authors, this kind of diagnostic is more useful if you can provide a better hint than just talking about the missing implementation itself. For example, this is an abridged sample from the standard library:

#[diagnostic::on_unimplemented(
    message = "the size for values of type `{Self}` cannot be known at compilation time",
    label = "doesn't have a size known at compile-time"
)]
pub trait Sized {}

For more information, see the reference section on the diagnostic tool attribute namespace.

Asserting unsafe preconditions

The Rust standard library has a number of assertions for the preconditions of unsafe functions, but historically they have only been enabled in #[cfg(debug_assertions)] builds of the standard library to avoid affecting release performance. However, since the standard library is usually compiled and distributed in release mode, most Rust developers weren't ever executing these checks at all.

Now, the condition for these assertions is delayed until code generation, so they will be checked depending on the user's own setting for debug assertions -- enabled by default in debug and test builds. This change helps users catch undefined behavior in their code, though the details of how much is checked are generally not stable.

For example, slice::from_raw_parts requires an aligned non-null pointer. The following use of a purposely-misaligned pointer has undefined behavior, and while if you were unlucky it may have appeared to "work" in the past, the debug assertion can now catch it:

fn main() {
    let slice: &[u8] = &[1, 2, 3, 4, 5];
    let ptr = slice.as_ptr();

    // Create an offset from `ptr` that will always be one off from `u16`'s correct alignment
    let i = usize::from(ptr as usize & 1 == 0);
    
    let slice16: &[u16] = unsafe { std::slice::from_raw_parts(ptr.add(i).cast::<u16>(), 2) };
    dbg!(slice16);
}
thread 'main' panicked at library/core/src/panicking.rs:220:5:
unsafe precondition(s) violated: slice::from_raw_parts requires the pointer to be aligned and non-null, and the total size of the slice not to exceed `isize::MAX`
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread caused non-unwinding panic. aborting.

Deterministic realignment

The standard library has a few functions that change the alignment of pointers and slices, but they previously had caveats that made them difficult to rely on in practice, if you followed their documentation precisely. Those caveats primarily existed as a hedge against const evaluation, but they're only stable for non-const use anyway. They are now promised to have consistent runtime behavior according to their actual inputs.

  • pointer::align_offset computes the offset needed to change a pointer to the given alignment. It returns usize::MAX if that is not possible, but it was previously permitted to always return usize::MAX, and now that behavior is removed.

  • slice::align_to and slice::align_to_mut both transmute slices to an aligned middle slice and the remaining unaligned head and tail slices. These methods now promise to return the largest possible middle part, rather than allowing the implementation to return something less optimal like returning everything as the head slice (see the sketch after this list).
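
To illustrate the second point, here is a minimal sketch (not from the release notes) of what the strengthened align_to guarantee means for a byte buffer:

fn main() {
    let bytes = [0u8; 16];
    // SAFETY: every bit pattern is a valid u16, so reinterpreting
    // aligned pairs of initialized bytes as u16 is sound here.
    let (head, mid, tail) = unsafe { bytes.align_to::<u16>() };
    // As of 1.78, `mid` is promised to be the largest possible aligned
    // middle slice: since u16 has alignment 2, at most one byte can be
    // left over on either side.
    assert!(head.len() <= 1 && tail.len() <= 1);
    assert_eq!(head.len() + mid.len() * 2 + tail.len(), bytes.len());
    println!("head={} mid={} tail={}", head.len(), mid.len(), tail.len());
}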

Stabilized APIs

These APIs are now stable in const contexts:

Compatibility notes

  • As previously announced, Rust 1.78 has increased its minimum requirement to Windows 10 for the following targets:
    • x86_64-pc-windows-msvc
    • i686-pc-windows-msvc
    • x86_64-pc-windows-gnu
    • i686-pc-windows-gnu
    • x86_64-pc-windows-gnullvm
    • i686-pc-windows-gnullvm
  • Rust 1.78 has upgraded its bundled LLVM to version 18, completing the announced u128/i128 ABI change for x86-32 and x86-64 targets. Distributors that use their own LLVM older than 18 may still face the calling convention bugs mentioned in that post.

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.78.0

Many people came together to create Rust 1.78.0. We couldn't have done it without all of you. Thanks!

Dave TownsendTests don't replace Code Review


I frequently see a bold claim come up in tech circles. That as a team you’re wasting time by doing code reviews. You should instead rely on automated tests to catch bugs. This surprises me because I can’t imagine anyone thinking that such a blanket statement is true. But then most of the time this is brought up in places like Twitter where nuance is impossible and engagement farming is rife. Still it got me thinking about why I think code review is important even if you have amazing tests.

Before I elaborate I’ll point out what should be obvious. Different projects have different needs. You shouldn’t listen to me tell you that you must do code review any more than you should listen to anyone else tell you that you must not do code review. Be pragmatic in all things. Beware one-size-fits-all statements (in almost any context).

We’ve been religiously performing code review on every (well, almost every) patch at Mozilla since well before I joined the project, which was quite some time ago. In that time I’ve seen Firefox go from having practically no automated tests to a set of automated test suites that, if run end to end on a single machine (which is impossible, but let’s ignore that), would take nearly two months (😱) to complete. And in all that time I don’t think I’ve ever heard anyone suggest we should stop doing code review for anything that actually ships to users (we do allow documentation changes with no review). Why?

# A good set of automated tests doesn’t just magically appear

Let’s start with the obvious.

Someone has to have written all of those tests. And others have to have verified all that. And even if your test suite is already perfect, how do you know that the developer building a new feature has also included the tests necessary to verify that feature going forwards?

There are some helpful tools that exist, like code coverage. But these are informative rather than conclusive: useful to track, but they should rarely be relied on by themselves.

# Garbage unmaintainable code passes tests

There are usually many ways to fix a bug or implement a feature. Some of those will be clear readable code with appropriate comments that a random developer in three years time can look at and understand quickly. Others will be spaghetti code that is to all intents and purposes obfuscated. Got a bug in there? It may take ten times longer to fix it. Lint rules can help with this to some extent, but a human code reviewer is going to spot unreadable code a mile away.

# You cannot test everything

It’s often not feasible to test for every possible case. Anywhere your code interacts with anything outside of itself, like a filesystem or a network, is going to have cases that are really hard to simulate. What if memory runs out at a critical moment? What if the OS suddenly decides that the disk is not writable? These are cases we have to handle all the time in Firefox. You could say we should build abstractions around everything so that tests can simulate all those cases. But abstractions are not cheap and performance is pretty critical for us.

# But I’m a 100x developer, none of this applies to me

I don’t care how senior a developer you are, you’ll make mistakes. I sure do. Now, it’s true that there is something to be said for adjusting your review approach based on the developer who wrote the code. If I’m reviewing a patch by a junior developer, I’m going to go over it with a fine-tooth comb, and when they re-submit I’m going to take care to make sure they addressed all my comments. Less so with a senior developer who I know knows the code at hand.

# So do tests help with code review at all?

Absolutely!

Tests are there to automatically spot problems, ideally before a change even reaches the review stage. Code review is there to fill in the gaps. You can mostly skip over worrying about whether a change breaks well-tested functionality (just don’t assume all functionality is well tested!). Instead you can focus on what the change is doing that cannot be tested:

  • Is it actually fixing the problem at hand?
  • Does it include appropriate changes to the automated tests?
  • Is the code maintainable?
  • Is the approach going to cause problems for other changes down the road?
  • Could there be performance issues?

Code review and automated tests are complementary. I believe you’ll get the best result when you employ both sensibly, assuming you have the resources to do so, of course. I don’t think large projects can do without both.

Mozilla Thunderbird: Thunderbird Monthly Development Digest: April 2024


Hello Thunderbird Community, and welcome back to the monthly Thunderbird development digest. April just ended and we’re running at full speed into May. We’re only a couple of months away from the next ESR, so things are landing faster and we’re seeing the finalization of a lot of parallel efforts.

20-year-old bugs

Something that has been requested for almost 20 years finally landed on Daily. The ability to control the display of recipients in the message list and better distinguish unknown addresses from those saved in the Address Book was finally implemented in Bug 243258 – Show email address in message list.

This is one of many examples of features that were very complicated and tricky to implement in the past, but that we were finally able to address thanks to improvements in our architecture and a more flexible, modular code base.

We’re aiming to go through those very old requests and slowly address them when possible.

Exchange alpha

More Exchange support improvements and features are landing on Daily almost…daily (pun intended). If you want to test things with a local build, you can follow this overview from Ikey.

We will soon look at the possibility of enabling Rust builds by default, making sure that all users will be able to consume our Rust code from the next beta, only needing to switch a pref in order to test Exchange.

Folder compaction

If you’ve been tracking our most recent struggles, you’re probably aware of one lingering and annoying issue: user profiles ballooning in size due to local folder corruption.

Ben dove into the code and found a spaghetti mess that was hard to untangle. You can read more about his exploration and discoveries in his recent post on TB-Planning.

We’re aiming to land this code hopefully before the end of the week and start calling for some testing and feedback from the community to ensure that all the various issues have been addressed correctly.

You can follow the progress in Bug 1890448 – Rewrite folder compaction.

Cards View

If you’re running Beta or Daily, you might have noticed some very fancy new UI for the Cards View. This is the culmination of many weeks of UX analysis to ensure flexible and consistent hover, selection, and focus states.

Micah and Sol identified a total of 27 different interaction states on that list, and implementing visual consistency while guaranteeing optimal accessibility levels for all operating systems and potential custom themes was not easy.

We’re very curious to hear your feedback.

Context menu

A more refined and updated context menu for the message list also landed on Daily.

A very detailed UX exploration and overview of the implementation was shared on the UX Mailing list a while ago.

This update is only the first step of many more to come, so we apologize in advance if some things are not super polished or things seem temporarily off.

ESR Preview

If you’re curious about what the next ESR will look like or want to check out new features, please consider downloading and installing Beta (preferably in another directory so it doesn’t override your current profile). Help us test this upcoming release and find bugs early.

As usual, if you want to see things as they land you can always check the pushlog and try running Daily, which would be immensely helpful for catching bugs early.

See ya next month.

Alessandro Castellani (he, him)
Director, Desktop and Mobile Apps

If you’re interested in joining the technical discussion around Thunderbird development, consider joining one or several of our mailing list groups here.

The post Thunderbird Monthly Development Digest: April 2024 appeared first on The Thunderbird Blog.

This Week In Rust: This Week in Rust 545

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is efs, a no-std ext2 filesystem implementation with plans to add other file systems in the future.

Another week completely devoid of suggestions, but llogiq stays hopeful he won't have to dig for next week's crate all by himself.

Please submit your suggestions and votes for next week!

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

  • No Calls for papers or presentations were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

CFP - Speakers

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the submission website through a PR to TWiR.

Updates from the Rust Project

409 pull requests were merged in the last week

Rust Compiler Performance Triage

Several non-noise changes this week, with both improvements and regressions coming as a result. Overall compiler performance is roughly neutral across the week.

Triage done by @simulacrum. Revision range: a77f76e2..c65b2dc9

2 Regressions, 2 Improvements, 3 Mixed; 1 of them in rollups. 51 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-05-01 - 2024-05-29 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

"I'll never!" "No, never is in the 2024 Edition." "But never can't be this year, it's never!" "Well we're trying to make it happen now!" "But never isn't now?" "I mean technically, now never is the unit." "But how do you have an entire unit if it never happens?"

Jubilee on Zulip

Thanks to Jacob Pratt for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

William Durand: Moziversary #6

Today is my sixth Moziversary 🎂 I joined Mozilla as a full-time employee on May 1st, 2018. I previously blogged in 2019, 2020, 2021, 2022, and 2023.

Last year, I mainly contributed to Firefox for Android as the lead engineer on a project called “Add-ons General Availability (GA)”. The goal was to allow for more add-ons on this platform. Success! More than a thousand extensions are now available on Android 🎉

In addition, I worked on a Firefox feature called Quarantined Domains and implemented a new abuse report form on addons.mozilla.org (AMO) to comply with the Digital Services Act (DSA). I was also involved in two other cross-team efforts related to the Firefox installation funnel. I investigated various issues (e.g. this openSUSE bug), and I coordinated the deprecation of weak add-on signatures and some more changes around certificates lately, which is why I wrote xpidump.

Phew! There is no shortage of work.

When I moved to the WebExtensions team in 2022, I wrote about this incredible challenge. I echoed this sentiment several months later in my 2022 Moziversary update. I couldn’t imagine how much I would achieve in two years…

Back then, I didn’t know what the next step in my career would be. I have been aiming to bridge the gap between the AMO and WebExtensions engineering teams since at least 2021 and that is my “next step”.

I recently took a new role as Add-ons Tech Lead. This is the continuation of what I’ve been doing for some time but that comes with new challenges and opportunities as well. We’ll see how it goes but I am excited!

I’ll be forever grateful to my manager and coworkers. Thank you ❤️

Don Marti: blog fix: remove stray files

Another update from the blog. Quick recap: I’m re-doing this blog with mostly Pandoc and make, with a few helper scripts.

This is a personal web site and can be broken sometimes, and one of the breakage problems was: oops, I removed a draft post from the directory of source files (in CommonMark) but the HTML version got built and put in public and copied to the server, possibly also affecting the index.html and the RSS feed.

If you’re reading the RSS and got some half-baked drafts, that’s why.

So, to fix it, I need to ask make if there’s anything in the public directory that doesn’t have a corresponding source file or files and remove it. Quick helper script:
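The script itself isn’t reproduced here, so here is a minimal sketch of the idea, assuming CommonMark sources in src/ and generated HTML in public/ (the directory names and extensions are assumptions, and this sketch runs standalone rather than from inside make):

#!/bin/sh
# Remove any generated HTML file whose CommonMark source no longer exists.
for html in public/*.html; do
    [ -e "$html" ] || continue              # glob matched nothing; skip
    src="src/$(basename "$html" .html).md"
    if [ ! -e "$src" ]; then
        echo "removing stray file: $html"
        rm -- "$html"
    fi
done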

That should mean a better RSS reading experience since you shouldn’t get it cluttered up with drafts if I make a mistake.

But I’m sure I have plenty of other mistakes I can make.

Related

planning for SCALE 2025

Automatically run make when a file changes

Hey kids, favicon!

responsive ascii art

Bonus links

We can have a different web Nothing about the web has changed that prevents us from going back. If anything, it’s become a lot easier.

A Lawsuit Argues Meta Is Required by Law to Let You Control Your Own Feed (Section 230 protection for a research extension? Makes sense to me.)

Effects of Banning Targeted Advertising (The top 10 percent of Android apps for kids did better after an ad personalization policy change, while the bottom 90 percent lost revenue. If Sturgeon’s Law applies to Android apps, the average under-13 user might be better off?)

In Response To Google (Does anyone else notice more and more people working on ways to fix their personal information environment because of the search quality crisis? This blog series from Ed Zitron has some good background.)

The Rust Programming Language Blog: Announcing Google Summer of Code 2024 selected projects

The Rust Project is participating in Google Summer of Code (GSoC) 2024, a global program organized by Google which is designed to bring new contributors to the world of open-source.

In February, we published a list of GSoC project ideas, and started discussing these projects with potential GSoC applicants on our Zulip. We were pleasantly surprised by the number of people who wanted to participate in these projects, which led to many fruitful discussions with members of various Rust teams. Some of them even began contributing to various repositories of the Rust Project before GSoC officially started!

After the initial discussions, GSoC applicants prepared and submitted their project proposals. We received 65 (!) proposals in total. We are happy to see that there was so much interest, given that this is the first time the Rust Project is participating in GSoC.

A team of mentors primarily composed of Rust Project contributors then thoroughly examined the submitted proposals. GSoC required us to produce a ranked list of the best proposals, which was a challenging task in itself since Rust is a big project with many priorities! We went through many rounds of discussions and had to consider many factors, such as prior conversations with the given applicant, the quality and scope of their proposal, the importance of the proposed project for the Rust Project and its wider community, but also the availability of mentors, who are often volunteers and thus have limited time available for mentoring.

In many cases, we had multiple proposals that aimed to accomplish the same goal. Therefore, we had to pick only one per project topic despite receiving several high-quality proposals from people we'd love to work with. We also often had to choose between great proposals targeting different work within the same Rust component to avoid overloading a single mentor with multiple projects.

In the end, we narrowed the list down to the twelve best proposals, which we felt was the maximum number that we could realistically support with our available mentor pool. We submitted this list and eagerly awaited how many of the twelve would be accepted into GSoC.

Selected projects

On the 1st of May, Google announced the accepted projects. We are happy to announce that nine of the twelve proposals we submitted were accepted by Google, and will thus participate in Google Summer of Code 2024! Below you can find the list of accepted proposals (in alphabetical order), along with the names of their authors and the assigned mentor(s):

Congratulations to all applicants whose project was selected! The mentors are looking forward to working with you on these exciting projects to improve the Rust ecosystem. You can expect to hear from us soon, so that we can start coordinating the work on your GSoC projects.

We would also like to thank all the applicants whose proposals were sadly not accepted, for their interactions with the Rust community and contributions to various Rust projects. There were some great proposals that did not make the cut, in large part because of limited review capacity. However, even if your proposal was not accepted, we would be happy if you considered contributing to the projects that got you interested, even outside GSoC! Our project idea list is still relevant, and could serve as a general entry point for contributors who would like to work on projects that would help the Rust Project maintainers and the Rust ecosystem.

Assuming our involvement in GSoC 2024 is successful, there's a good chance we'll participate next year as well (though we can't promise anything yet), and we hope to receive your proposals again in the future! We are also planning to participate in similar programs in the very near future. Those announcements will come in separate blog posts, so make sure to subscribe to this blog so that you don't miss anything.

The accepted GSoC projects will run for several months. After GSoC 2024 finishes (in autumn of 2024), we plan to publish a blog post in which we will summarize the outcome of the accepted projects.

Support.Mozilla.Org: What’s up with SUMO — Q1 2024

Hi everybody,

It’s always exciting to start a new year; it brings renewed spirit. Even more exciting because the CX team welcomed a few additional members this quarter, including Konstantina, who will be with us crafting better community experiences in SUMO. This is huge, since the SUMO community team has been under-resourced for the past few years. I’m personally super excited about this. There are a few things that we’re working on internally, and I can’t wait to share them with you all. But first things first, let’s read the recap of what happened and what we did in Q1 2024!

Welcome note and shout-outs

  • Thanks for joining the Social and Mobile Store Support program!
  • Welcome back to Erik L and Noah. It’s good to see you more often these days.
  • Shout-outs to Noah and Sören for their observations during the 125 release so we can take prompt actions on bug1892521 and bug1892612. Also, special thanks to Paul W for his direct involvement in the war room for the NordVPN incident.
  • Thanks to Philipp for his consistency in creating a desktop thread in the contributor forum for every release. Your help is greatly appreciated!
  • Also huge thanks to everybody who was involved in the Night Mode removal issue on Firefox for iOS 124. In the end, we decided to end the experiment early, since many people raised concerns about accessibility issues. This really shows the power of community and user feedback.

If you know someone who you’d like to feature here, please contact Kiki, and we’ll make sure to add them in our next edition.

Community news

  • As I mentioned, we started the year by onboarding Mandy, Donna, and Britney. If that’s not enough, we also welcomed Konstantina, who moved from Marketing to the CX team in March. If you haven’t got to know them, please don’t hesitate to say hi when you can.
  • AI spam has been a big issue in our forum lately, so we decided to spin up a new contributor policy around the use of generative AI tools. Please check this thread if you haven’t!
  • We participated in FOSDEM 2024 in Brussels and it was a blast! It’s great to be able to meet face to face with many community members after a long hiatus since the pandemic. Kiki and the platform team also presented a talk in the Mozilla devroom. We also shared free cookies (not a tracking one) and talked with many Firefox fans from around the globe. All in all, it was a productive weekend, indeed.
  • We added a new capability in our KB to set restricted visibility on specific articles. This is a staff-only feature, but we believe it’s important for everybody to be aware of this. If you haven’t, please check out this thread to get to know more!
  • Please be aware of Hubs sunset plan from this thread.
  • We opened an AAQ for NL in our support forum. Thanks to Tim Maks and the rest of the NL community, who’ve been very supportive of this work.
  • We’ve done our usual annual contributor survey in March. Thank you to every one of you who filled out the survey and shared great feedback!
  • We changed how we communicate product release updates, which now happens through bi-weekly scrum meetings. Please check out this contributor thread to learn more.
  • Are you contributing to our Knowledge Base? You may want to read the recent blog posts from the content team to get to know more about what they’re up to. In short, they’re doing a lot around freshening up our knowledge base articles.

Stay updated

  • Join our discussions in the contributor forum to see what’s happening in the latest release on Desktop and mobile.
  • Watch the monthly community call if you haven’t. Learn more about what’s new in January, and March (we canceled February)! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • If you’re an NDA’ed contributor, you can watch the recording of our Firefox Pod Meeting from AirMozilla to catch up with the latest train release. You can also subscribe to the AirMozilla folder by clicking on the Subscribe button at the top right corner of the page to get notifications each time we add a new recording.
  • Consider subscribing to Firefox Daily Digest to get daily updates (Mon-Fri) about Firefox from across the internet.
  • Check out the SUMO Engineering Board to see what the platform team is cooking in the engine room. Also, check out this page to see our latest release notes.

Community stats

KB

KB pageviews

Month Page views Vs previous month
Jan 2024 6,743,722 3.20%
Feb 2024 7,052,665 4.58%
Mar 2024 6,532,175 -7.38%
KB pageviews number is a total of English (en-US) KB pageviews

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale   Jan 2024    Feb 2024    Mar 2024    Localization progress (as of Apr 23)
de       2,425,154   2,601,865   2,315,952   92%
fr       1,559,222   1,704,271   1,529,981   81%
zh-CN    1,351,729   1,224,284   1,306,699   100%
es       1,171,981   1,353,200   1,212,666   25%
ja       1,019,806   1,068,034   1,051,625   34%
ru       801,370     886,163     812,882     100%
pt-BR    661,612     748,185     714,554     42%
zh-TW    598,085     623,218     366,320     3%
it       533,071     575,245     529,887     96%
pl       489,532     532,506     454,347     84%

Locale pageviews is an overall pageview from the given locale (KB and other pages)

Localization progress is the percentage of localized articles from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

Month Total questions Answer rate within 72 hrs Solved rate within 72 hrs Forum helpfulness
Jan 2024 2999 72.6% 10.8% 61.3%
Feb 2024 2766 72.4% 9.5% 65.6%
Mar 2024 2516 71.5% 10.4% 71.6%

Top 5 forum contributors in the last 90 days:

Social Support

Month Total replies Total interactions Respond conversion rate
Jan 2024 33 46 71.74%
Feb 2024 25 65 38.46%
Mar 2024 14 87 16.09%

Top 5 Social Support contributors in the past 3 months: 


Play Store Support

Month Total replies Total interactions Respond conversion rate
Jan 2024 76 276 27.54%
Feb 2024 49 86 56.98%
Mar 2024 47 80 58.75%

Top 5 Play Store contributors in the past 3 months:

Stay connected

Mozilla Privacy Blog: The UK’s Digital Markets, Competition, and Consumers Bill will spark the UK’s digital economy, not stifle it

In today’s digital age, an open and competitive ecosystem with a diverse range of players is essential for building a resilient economy. New products and ideas must have the opportunity to grow to give people meaningful choices. Yet, this reality often falls short due to the dominance of a handful of large companies that create walled gardens by self-preferencing their services over independent competitors – limiting choice and hampering innovation.

The UK’s Digital Markets, Competition, and Consumers Bill (DMCCB) offers a unique opportunity to break down these barriers, paving the way for a more competitive and consumer-centric digital market. On the competition side, the DMCCB offers flexibility in allowing for targeted codes of conduct to regulate the behaviour of dominant players. This agile and future-proof approach makes it unique in the ex-ante interventions being considered around the world to rein in abuse in digital markets. An example of what such a code of conduct might look like in practice is the voluntary commitments given by Google to the CMA in the Privacy Sandbox case.

Mozilla, in line with our long history of supporting pro-competition regulatory interventions, supports the DMCCB and its underlying goal of fostering competition by empowering consumers. However, to truly deliver on this promise, the law must be robust, effective, and free from loopholes that could undermine its intent.

Last month, the House of Lords made some much needed improvements to the DMCCB – which are now slated to be debated in the House of Commons in late April/early May 2024. A high-level overview of the key positive changes and why they should remain a part of the law are:

  • Time Limits: To ensure the CMA can act swiftly and decisively, its work should be free from undue political influence. This reduces opportunities for undue lobbying and provides clarity for both consumers and companies. While it would be ideal for the CMA to be able to enforce its code of conduct, Mozilla supports the House of Lords’ amendment to introduce a 40-day time limit for the Secretary of State’s approval of CMA guidance. This is a crucial step in avoiding delays and ensuring effective enforcement. The government’s acceptance of this approach and the alternative proposal of 30 working days for debate in the House of Commons is a positive sign, which we hope is reflected in the final law.
  • Proportionality: The Bill’s approach to proportionality is vital. Introducing prohibitive proportionality requirements on remedies could weaken the CMA’s ability to make meaningful interventions, undermining the Bill’s effectiveness. Mozilla endorses the current draft of the Bill from the House of Lords, which strikes a balance by allowing for effective remedies without excessive constraints.
  • Countervailing Benefits: Similarly, the countervailing benefits exemption to CMA’s remedies, while powerful, should not be used as a loophole to justify anti-competitive practices. Mozilla urges that this exemption be reserved for cases of genuine consumer benefit by restoring the government’s original requirement that such exemptions are “indispensable”, ensuring that it does not become a ‘get out of jail free’ card for dominant players.

Mozilla remains committed to supporting the DMCCB’s swift passage through Parliament and ensuring that it delivers on its promise to empower consumers and promote innovation. We launched a petition earlier today to help push the law over the finish line. By addressing the key concerns we’ve highlighted above and maintaining a robust framework, the UK can set a global standard for digital markets and create an environment where consumers are truly in charge.

The post The UK’s Digital Markets, Competition, and Consumers Bill will spark the UK’s digital economy, not stifle it appeared first on Open Policy & Advocacy.

Don Marti: realistically get rid of third-party cookies

How would a browser realistically get rid of third-party cookies, if the plan was to just replace third-party cookies, and the project requirements did not include a bunch of anticompetitive tricks too?

  1. Start offering a very scary dialog to a fraction of new users. Something like Do you want to test a new experimental feature? It might—maybe—have some privacy benefits but many sites will break. Don’t expect a lot of people to agree at first.

  2. Turn off third-party cookies for the users who did say yes in step 1, and watch the telemetry. There will be positive and negative effects, but they won’t be overwhelmingly bad because most sites have to work with other browsers.

  3. When the breakage detected in step 2 gets to be insignificant as a cause of new browser users quitting or reinstalling, start making the dialog less scary and show it to more people.

  4. Keep repeating until most new installs are third-party cookie-free, then start offering the dialog on browser upgrades.

  5. Continue, for more and more users, until you get to 95-99%. Leave the third-party cookies on for 1-5% of users for a couple of releases just to spot any lingering problems, then make third-party cookies default off, with no dialog (users would have to find the preference to re-enable them, or their sysadmin would have to push out a centralized change if some legacy corporate site still needs them).

But what about the personalized ads? Some people actually want those! Not a problem. The good news is that ad personalization can be done in an extension. Ask extension developers who have extensions that support ad personalization to sign up for a registry of ad personalization extensions, then keep track of how many users are installing each one. Adtech firms don’t (usually?) have personalization extensions today, but every company can develop one on its own schedule, with less uncertainty and fewer dependencies and delays than the current end of cookies mess. The extension development tools are really good now.

As soon as an ad personalization extension can pass an independent security audit (done by a company agreed on by the extension developer and the browser vendor) and get, say, 10,000 users, then the browser can put it on a choice screen that gets shown for new installs and, if added since last upgrade, upgrades. (The browser could give the dogmatic anti-personalization users a preference to opt out of these choice screens if they really wanted to dig in and find it.) This makes the work of competition regulators much easier—they just have to check that the browser vendor’s own ad personalization extension gets fair treatment with competing ones.

And we’re done. The privacy people and the personalized ad people get what they want with much less drama and delay, the whole web ad business isn’t stuck queued up waiting for one development team, and all that’s missing is the anticompetitive stuff that has been making end of cookies work such a pain since 2019.

Related

the 30-40-30 rule An updated list of citations to user research on how many people want personalized ads

Can database marketing sell itself to the people in the database? Some issues that an ad personalization extension might have to address in order to get installs

User tracking as Chesterton’s Fence What tracking-based advertising still offers (that alternatives don’t)

Catching up to Safari? Some features that Apple has done right, with opportunities for other browsers to think different(ly)

Bonus links

An open letter to the advertising punditry I personally got involved in the Inventory Quality reviews to make sure that the data scientists weren’t pressured by the business and could find the patterns–like ww3 [dot] forbes [dot] com–and go after them.

The Rise of Large-Language-Model Optimization The advent of AI threatens to destroy the complex online ecosystem that allows writers, artists, and other creators to reach human audiences.

The Man Who Killed Google Search [M]any found that the update mostly rolled back changes, and traffic was increasing to sites that had previously been suppressed by Google Search’s “Penguin” update from 2012 that specifically targeted spammy search results, as well as those hit by an update from an August 1, 2018…

Lawsuit in London to allege Grindr shared users’ HIV status with ad firms (This is why you can safely mute anybody who uses the expression k-anonymity, the info about yourself that you most want to keep private is true for more than k other people.)

UK children bombarded by gambling ads and images online, charity warns (attention parents: copy the device rules that Big Tech big shots maintain for their own children, not what they want for yours)

Mozilla Privacy Blog: Work Gets Underway on a New Federal Privacy Proposal

At Mozilla, safeguarding privacy has been core to our mission for decades — we believe that individuals’ security and privacy on the Internet are fundamental and must not be treated as optional. We have long advocated for a federal privacy law to ensure consumers have control over their data and that companies are accountable for their privacy practices.

Earlier this month, House Committee on Energy and Commerce Chair Cathy McMorris Rodgers (R-WA) and Senate Committee on Commerce, Science and Transportation Chair Maria Cantwell (D-WA) unveiled a discussion draft of the American Privacy Rights Act of 2024 (APRA). The Act is a welcome bipartisan effort to create a unified privacy standard across the United States, with the promise of finally protecting the privacy of all Americans.

At Mozilla, we are committed to the principle of data minimization – a concept that’s fundamental to effective privacy legislation – and we are pleased to see it at the core of APRA. Data minimization means we conscientiously collect only the necessary data, ensure its protection, and provide clear and concise explanations about what data we collect and why. We are also happy to see additional strong language from the American Data Privacy and Protection Act (ADPPA) reflected in this new draft, including non-discrimination provisions and a universal opt-out mechanism (though we support clarification that ensures allowance of multiple mechanisms).

However, the APRA discussion draft has open questions that must be refined. These include how APRA handles protections for children, options for strengthening data brokers provisions even further (such as a centralized mechanism for opt-out rights), and key definitions that require clarity around advertising. We look forward to engaging with policymakers as the process advances.

Achieving meaningful reform in the U.S. is long overdue. In an era where digital privacy concerns are on the rise, it’s essential to establish clear and enforceable privacy rights for all Americans. Mozilla stands ready to contribute to the dialogue on APRA and collaborate toward achieving comprehensive privacy reform. Together, we can prioritize the interests of individuals and cultivate trust in the digital ecosystem.


The post Work Gets Underway on a New Federal Privacy Proposal appeared first on Open Policy & Advocacy.

Mozilla Privacy Blog: Net Neutrality is Back!

Yesterday, the Federal Communications Commission (FCC) voted 3-2 to reinstate net neutrality rules and protect consumers online. We applaud this decision to keep the internet open and accessible to all, and to reverse the 2018 roll-back of net neutrality protections. Alongside our many partners and allies, Mozilla has been a long-time proponent of net neutrality across the world and in U.S. states, and has mobilized hundreds of thousands of people over the years.

The new FCC order reclassifies broadband internet as a “telecommunications service” and prevents ISPs from blocking or throttling traffic, or engaging in paid prioritization. This action restores meaningful and enforceable FCC oversight and protection on the internet, and unlocks innovation, competition, and free expression online.

You can read Mozilla’s submission to the FCC on the proposed Safeguarding and Securing the Open Internet rules in December 2023 here and additional reply comments in January 2024 here.

Net neutrality and openness are essential parts of how we experience the internet and, as illustrated during the COVID pandemic, can offer important protections – so it shouldn’t come as a surprise that a strong majority of Americans support it. Yesterday’s decision reaffirms that the internet is and should remain a public resource, where companies cannot abuse their market power to the detriment of consumers, and where actors large and small operate on a level playing field.

Earlier this month, Mozilla participated in a roundtable discussion with experts and allies hosted by Chairwoman Rosenworcel at the Santa Clara County Fire Department. The event location highlighted the importance of net neutrality, as the site where Verizon throttled firefighters’ internet speeds in the midst of fighting a raging wildfire. You can watch the full press conference below, and read coverage of the event here.

We thank the FCC for protecting these vital net neutrality safeguards, and we look forward to seeing the details of the final order when released.

The post Net Neutrality is Back! appeared first on Open Policy & Advocacy.

The Servo Blog: This month in Servo: Acid2 redux, Servo book, Qt demo, and more!

Servo now renders Acid2 perfectly, but like all browsers, only at 1x dpi.

Back in November, Servo’s new layout engine passed Acid1, and this month, thanks to a bug-squashing sprint by @mrobinson and @Loirooriol, we now pass Acid2!

We would also like to thank you all for your generous support! Since we moved to Open Collective and GitHub Sponsors in March, we have received 1578 USD (after fees), including 1348 USD/month (before fees) in recurring donations. This smashed our first two goals, and is a respectable part of the way towards our next goal of 10000 USD/month. For more details, see our Sponsorship page and announcement post.


We are still receiving donations from 19 people on LFX, and we’re working on transferring the balance to our new fund, but we will stop accepting donations there soon — please move your recurring donations to GitHub or Open Collective. As always, use of these funds will be decided transparently in the Technical Steering Committee, starting with the TSC meeting on 29 April.

The Servo book, a book much like the Rust book

Servo’s docs are moving to the Servo book, and a very early version of this is now online (@delan, servo/book)! The goal is to unify our many sources of documentation, including the hacking quickstart guide, building Servo page, Servo design page, and other in-tree docs and wiki pages, into a book that’s richer and easier to search and navigate.

Servo now supports several new features in its nightly builds:

As of 2024-04-05, we now support non-autoplay <video> (@eerii, media#419, #32001), as long as the page provides its own controls, as well as the ‘baseline-source’ property (@MunishMummadi, #31904, #31913). Both of these contributors started out as Outreachy participants, and we’re thrilled to see their continued work on improving Servo.

We’ve also landed several other rendering improvements:

Our font rendering has improved, with support for selecting the correct weight and style in indexed fonts (.ttc) on Linux (@mukilan, @mrobinson, #32127), as well as support for emoji font fallback on macOS (@mrobinson, #32122). Note that color emoji are not yet supported.

Other big changes are coming to Servo’s font loading and rendering, thanks to @mrobinson’s font system redesign RFC (#32033). Work has already started on this (@mrobinson, @mukilan, #32034, #32038, #32100, #32101, #32115), with the eventual goal of making font data zero-copy readable from multiple threads. This in turn will fix several major issues with font caching, including cached font data leaking over time and between pages, unnecessary loading from disk, and unnecessary copying to layout.

We’ve also started simplifying the script–layout interface (@mrobinson, #31937, #32081), since layout was merged into the script thread, and script can now call into layout without IPC.

Embedding and dev changes

The prototype shows that Servo can be integrated with a Qt app via CXX-Qt.

A prototype for integrating Servo with Qt was built by @ahayzen-kdab and @vimpostor and shown at Embedded World 2024. We’re looking forward to incorporating their feedback from this to improve Servo’s embedding API. For more details, check out their GitHub repo and Embedding the Servo Web Engine in Qt.

Servo now supports multiple concurrent webviews (@wusyong, @delan, @atbrakhi, #31417, #32067)! This is a big step towards making Servo a viable embedded webview, and we will soon use it to implement tabbed browsing in servoshell (@delan, #31545).

Three of the slowest crates in the Servo build process are mozjs_sys, mozangle, and script. The first two compile some very large C++ libraries in their build scripts — SpiderMonkey and ANGLE respectively — and the third blocks on the first two. They can account for over two minutes of build time, even on a very fast machine (AMD 7950X), and a breaking change in newer versions of GNU Make (mozjs#375) can make mozjs_sys take over eight minutes to build!

mozjs_sys now uses a prebuilt version of SpiderMonkey by default (@wusyong, @sagudev, mozjs#450, #31824), cutting clean build times by over seven minutes on a very fast machine (see above). On Linux with Nix (the package manager), where we run an unaffected version of GNU Make, it can still save over 100 seconds on a quad-core CPU with SMT. Further savings will be possible once we do the same for mozangle.

If you use NixOS, or any Linux distro with Nix, you can now get a shell with all of the tools and dependencies needed to build and run Servo by typing nix-shell (@delan, #32035), without also needing to type etc/shell.nix.

As for CI, our experimental Android build now supports aarch64 (@mukilan, #32137), in addition to Android on armv7, x86_64, and i686, and we’ve improved flakiness in the WebGPU tests (@sagudev, #31952) and macOS builds (@mrobinson, #32005).

Conferences and events

Earlier this month, Rakhi Sharma gave her talk A year of Servo reboot: where are we now? at Open Source Summit North America (slides; recording available soon) and at the Seattle Rust User Group (slides).

In the Netherlands, Gregory Terzian will be presenting Modular Servo: Three Paths Forward at the GOSIM Conference 2024, on 6 May at 15:10 local time (13:10 UTC). That’s the same venue as RustNL 2024, just one day earlier, and you can also find Gregory, Rakhi, and Nico at RustNL afterwards. See you there!

Will Kahn-Greene: crashstats-tools v2.0.0 released!

What is it?

crashstats-tools is a set of command-line tools for working with Crash Stats (https://crash-stats.mozilla.org/).

crashstats-tools comes with four commands:

  • supersearch: for performing Crash Stats Super Search queries

  • supersearchfacet: for performing aggregations, histograms, and cardinality Crash Stats Super Search queries

  • fetch-data: for fetching raw crash, dumps, and processed crash data for specified crash ids

  • reprocess: for sending crash report reprocess requests

v2.0.0 released!

There have been a lot of improvements since the last blog post for the v1.0.1 release: new commands, new features, an improved CLI UI, and more.

v2.0.0 focused on two major things:

  1. improving supersearchfacet to support nested aggregation, histogram, and cardinality queries

  2. moving some of the code into a crashstats_tools.libcrashstats module improving its use as a library

Improved supersearchfacet

The other day, Alex and team finished up the crash reporter Rust rewrite. The rewrite landed and is available in Firefox on the nightly channel, in builds where build_id >= 20240321093532.

The crash reporter is one of the clients that submit crash reports to Socorro, which is now maintained by the Observability Team. Firefox has multiple crash reporter clients, and there are many ways that crash reports can get submitted to Socorro.

One of the changes we can now see in the crash report data is the User-Agent header. The new rewritten crash reporter sends a header of crash-reporter/1.0.0. That gets captured by the collector and put in the raw crash metadata.user_agent field. It doesn't get indexed, so we can't search on it directly.

We can get a sampling of the last 100 crash reports, download the raw crash data, and look at the user agents.
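The original commands aren't reproduced here, but a rough sketch with crashstats-tools pipes crash ids from supersearch into fetch-data, as described in the project README (treat the exact flags as assumptions and check supersearch --help and fetch-data --help):

$ supersearch --product=Firefox --num=100 | \
      fetch-data --raw --no-dumps --no-processed crashdata

The raw crash JSON files land in the crashdata directory, where the metadata.user_agent field can be pulled out with a quick script.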

16 out of 100 crash reports were submitted by the new crash reporter. We were surprised there were so many Firefox user agents. We discussed this on Slack. I loosely repeat it here because it's a great way to show off some of the changes to supersearchfacet in v2.0.0.

First, the rewritten crash reporter only affects the parent (aka main) process. The other processes have different crash reporters that weren't rewritten.

How many process types are there for Firefox crash reports in the last week? We can see that in the ProcessType annotation (docs) which is processed and saved in the process_type field (docs).
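The post's actual query and its output aren't reproduced here, but a rough sketch of the aggregation with supersearchfacet looks like this (flag names taken from the crashstats-tools README; treat them as assumptions):

$ supersearchfacet --product=Firefox --_facets=process_type

The per-process-type counts it prints are the output the next paragraph refers to.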

Judging by that output, I would expect to see a higher percentage of crashreporter/1.0.0 in our sampling of 100 crash reports.

Turns out that Firefox uses different code to submit crash reports not just by process type, but also by user action. That's in the SubmittedFrom annotation (docs) which is processed and saved in the submitted_from field (docs).

What is "Auto"? The user can opt in to auto-sending crash reports. When Firefox upgrades and this setting is set, Firefox will auto-send any unsubmitted crash reports. The nightly channel has two updates a day, so there's lots of opportunity for this event to trigger.

What're the counts for submitted_from/process_type pairs?

We can spot check these different combinations to see what the user-agent looks like.

For brevity, we'll just look at parent / Client in this blog post.

Seems like the crash reporter rewrite only affects crash reports where ProcessType=parent and SubmittedFrom=Client. All the other process_type/submitted_from combinations get submitted a different way where the user agent is the browser itself.

How many crash reports has the new crash reporter submitted over time?

There are more examples in the crashstats-tools README.

crashstats_tools.libcrashstats library

Starting with v2.0.0, you can use crashstats_tools.libcrashstats as a library for Python scripts.

For example:
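The original example isn't shown here; the following sketch illustrates the general shape, with the function name and arguments being assumptions based on the command-line tools rather than the documented API (see the libcrashstats documentation for the real signatures):

# Hypothetical sketch: the import and signature are assumptions.
from crashstats_tools.libcrashstats import supersearch

# Fetch the 100 most recent Firefox crash reports and print their ids.
for result in supersearch(params={"product": "Firefox"}, num_results=100):
    print(result["uuid"])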

libcrashstats makes using the Crash Stats API a little more ergonomic.

See the crashstats_tools.libcrashstats library documentation.

Be thoughtful about using data

Make sure to use these tools in compliance with our data policy:

https://crash-stats.mozilla.org/documentation/protected_data_access/

Where to go for more

See the project on GitHub which includes a README which contains everything about the project including examples of usage, the issue tracker, and the source code:

https://github.com/willkg/crashstats-tools

Let me know whether this helps you!

Hacks.Mozilla.Org: Llamafile’s progress, four months in

When Mozilla’s Innovation group first launched the llamafile project late last year, we were thrilled by the immediate positive response from open source AI developers. It’s become one of Mozilla’s top three most-favorited repositories on GitHub, attracting a number of contributors, some excellent PRs, and a growing community on our Discord server.

Through it all, lead developer and project visionary Justine Tunney has remained hard at work on a wide variety of fundamental improvements to the project. Just last night, Justine shipped the v0.8 release of llamafile, which includes not only support for the very latest open models, but also a number of big performance improvements for CPU inference.

As a result of Justine’s work, today llamafile is both the easiest and fastest way to run a wide range of open large language models on your own hardware. See for yourself: with llamafile, you can run Meta’s just-released LLaMA 3 model–which rivals the very best models available in its size class–on an everyday MacBook.

How did we do it? To explain that, let’s take a step back and tell you about everything that’s changed since v0.1.

tinyBLAS: democratizing GPU support for NVIDIA and AMD

llamafile is built atop the now-legendary llama.cpp project. llama.cpp supports GPU-accelerated inference for NVIDIA processors via the cuBLAS linear algebra library, but that requires users to install NVIDIA’s CUDA SDK. We felt uncomfortable with that fact, because it conflicts with our project goal of building a fully open-source and transparent AI stack that anyone can run on commodity hardware. And besides, getting CUDA set up correctly can be a bear on some systems. There had to be a better way.

With the community’s help (here’s looking at you, @ahgamut and @mrdomino!), we created our own solution: it’s called tinyBLAS, and it’s llamafile’s brand-new and highly efficient linear algebra library. tinyBLAS makes NVIDIA acceleration simple and seamless for llamafile users. On Windows, you don’t even need to install CUDA at all; all you need is the display driver you’ve probably already installed.

But tinyBLAS is about more than just NVIDIA: it supports AMD GPUs, as well. This is no small feat. While AMD commands a respectable 20% of today’s GPU market, poor software and driver support have historically made them a secondary player in the machine learning space. That’s a shame, given that AMD’s GPUs offer high performance, are price competitive, and are widely available.

One of llamafile’s goals is to democratize access to open source AI technology, and that means getting AMD a seat at the table. That’s exactly what we’ve done: with llamafile’s tinyBLAS, you can now easily make full use of your AMD GPU to accelerate local inference. And, as with CUDA, if you’re a Windows user you don’t even have to install AMD’s ROCm SDK.

All of this means that, for many users, llamafile will automatically use your GPU right out of the box, with little to no effort on your part.

CPU performance gains for faster local AI

Here at Mozilla, we are keenly interested in the promise of “local AI,” in which AI models and applications run directly on end-user hardware instead of in the cloud. Local AI is exciting because it opens up the possibility of more user control over these systems and greater privacy and security for users.

But many consumer devices lack the high-end GPUs that are often required for inference tasks. llama.cpp has been a game-changer in this regard because it makes local inference both possible and usably performant on CPUs instead of just GPUs. 

Justine’s recent work on llamafile has now pushed the state of the art even further. As documented in her detailed blog post on the subject, by writing 84 new matrix multiplication kernels she was able to increase llamafile’s prompt evaluation performance by an astonishing 10x compared to our previous release. This is a substantial and impactful step forward in the quest to make local AI viable on consumer hardware.

This work is also a great example of our commitment to the open source AI community. After completing this work we immediately submitted a PR to upstream these performance improvements to llama.cpp. This was just the latest of a number of enhancements we’ve contributed back to llama.cpp, a practice we plan to continue.

Raspberry Pi performance gains

Speaking of consumer hardware, there are few examples that are both more interesting and more humble than the beloved Raspberry Pi. For a bargain basement price, you get a full-featured computer running Linux with plenty of computing power for typical desktop uses. It’s an impressive package, but historically it hasn’t been considered a viable platform for AI applications.

Not any more. llamafile has now been optimized for the latest model (the Raspberry Pi 5), and the result is that a number of small LLMs–such as Rocket-3B (download), TinyLLaMA-1.5B (download), and Phi-2 (download)–run at usable speeds on one of the least expensive computers available today. We’ve seen prompt evaluation speeds of up to 80 tokens/sec in some cases!

Keeping up with the latest models

The pace of progress in the open model space has been stunningly fast. Over the past few months, hundreds of models have been released or updated via fine-tuning. Along the way, there has been a clear trend of ever-increasing model performance and ever-smaller model sizes.

The llama.cpp project has been doing an excellent job of keeping up with all of these new models, frequently rolling out support for new architectures and model features within days of their release.

For our part we’ve been keeping llamafile closely synced with llama.cpp so that we can support all the same models. Given the complexity of both projects, this has been no small feat, so we’re lucky to have Justine on the case.

Today, you can use the very latest and most capable open models with llamafile thanks to her hard work. For example, we were able to roll out llamafiles for Meta’s newest LLaMA 3 models–8B-Instruct and 70B-Instruct–within a day of their release. With yesterday’s 0.8 release, llamafile can also run Grok, Mixtral 8x22B, and Command-R.

Creating your own llamafiles

Since the day that llamafile shipped people have wanted to create their own llamafiles. Previously, this required a number of steps, but today you can do it with a single command, e.g.:

llamafile-convert [model.gguf]

In just moments, this will produce a “model.llamafile” file that is ready for immediate use. Our thanks to community member @chan1012 for contributing this helpful improvement.

In a related development, Hugging Face recently added official support for llamafile within their model hub. This means you can now search and filter Hugging Face specifically for llamafiles created and distributed by other people in the open source community.

OpenAI-compatible API server

Since it’s built on top of llama.cpp, llamafile inherits that project’s server component, which provides OpenAI-compatible API endpoints. This enables developers who are building on top of OpenAI to switch to using open models instead. At Mozilla we very much want to support this kind of future: one where open-source AI is a viable alternative to centralized, closed, commercial offerings.

While open models do not yet fully rival the capabilities of closed models, they’re making rapid progress. We believe that making it easier to pivot existing code over to executing against open models will increase demand and further fuel this progress.

Over the past few months, we’ve invested effort in extending these endpoints, both to increase functionality and improve compatibility. Today, llamafile can serve as a drop-in replacement for OpenAI in a wide variety of use cases.
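
To make that concrete, here is a minimal sketch (ours, not from the project docs) of pointing Rust code at a local llamafile server. It assumes the server is running at its default address of http://localhost:8080, and it uses the third-party reqwest (with the blocking and json features) and serde_json crates:

use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The same request shape you would send to OpenAI, pointed at the
    // local llamafile server instead.
    let body = json!({
        "model": "local-model", // llamafile serves whatever model it was started with
        "messages": [{ "role": "user", "content": "Say hello in one sentence." }]
    });
    let resp: serde_json::Value = reqwest::blocking::Client::new()
        .post("http://localhost:8080/v1/chat/completions")
        .json(&body)
        .send()?
        .json()?;
    // Print the assistant's reply from the first choice.
    println!("{}", resp["choices"][0]["message"]["content"]);
    Ok(())
}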

We want to further extend our API server’s capabilities, and we’re eager to hear what developers want and need. What’s holding you back from using open models? What features, capabilities, or tools do you need? Let us know!

Integrations with other open source AI projects

Finally, it’s been a delight to see llamafile adopted by independent developers and integrated into leading open source AI projects (like Open Interpreter). Kudos in particular to our own Kate Silverstein who landed PRs that add llamafile support to LangChain and LlamaIndex (with AutoGPT coming soon).

If you’re a maintainer or contributor to an open source AI project that you feel would benefit from llamafile integration, let us know how we can help.

Join us!

The llamafile project is just getting started, and it’s also only the first step in a major new initiative on Mozilla’s part to contribute to and participate in the open source AI community. We’ll have more to share about that soon, but for now: I invite you to join us on the llamafile project!

The best place to connect with both the llamafile team at Mozilla and the overall llamafile community is over at our Discord server, which has a dedicated channel just for llamafile. And of course, your enhancement requests, issues, and PRs are always welcome over at our GitHub repo.

I hope you’ll join us. The next few months are going to be even more interesting and unexpected than the last, both for llamafile and for open source AI itself.


The post Llamafile’s progress, four months in appeared first on Mozilla Hacks - the Web developer blog.

This Week In RustThis Week in Rust 544

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is scandir, a high-performance file tree scanner.

Thanks to Marty B. for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available; visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

CFP - Speakers

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the submission website through a PR to TWiR.

Updates from the Rust Project

432 pull requests were merged in the last week

Rust Compiler Performance Triage

A week dominated by small mixed changes to perf with improvements slightly outweighing regressions. There were no pure regressions, and many of the mixed perf results were deemed worth it for their potential improvements to runtime performance through further optimization from LLVM.

Triage done by @rylev. Revision range: ccfcd950..a77f76e2

Summary:

(instructions:u)             mean    range            count
Regressions ❌ (primary)      0.4%    [0.2%, 1.8%]     57
Regressions ❌ (secondary)    0.4%    [0.2%, 1.9%]     26
Improvements ✅ (primary)    -0.8%    [-3.4%, -0.2%]   50
Improvements ✅ (secondary)  -0.6%    [-1.9%, -0.1%]   32
All ❌✅ (primary)            -0.2%    [-3.4%, 1.8%]    107

0 Regressions, 5 Improvements, 6 Mixed; 2 of them in rollups. 62 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust Cargo
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-04-24 and 2024-05-22 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

The learning curve for Rust is relatively steep compared to other languages, but once you climb it you'll never look down.

BD103 on Mastodon

Thanks to BD103 for the self-suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Niko MatsakisSized, DynSized, and Unsized

Extern types have been blocked for an unreasonably long time on a fairly narrow, specialized question: Rust today divides all types into two categories — sized, whose size can be statically computed, and unsized, whose size can only be computed at runtime. But for extern types what we really want is a third category, types whose size can never be known, even at runtime (in C, you can model this by defining structs with an unknown set of fields). The problem is that Rust’s ?Sized notation does not naturally scale to this third case. I think it’s time we fixed this. At some point I read a proposal — I no longer remember where — that seems like the obvious way forward and which I think is a win on several levels. So I thought I would take a bit of time to float the idea again, explain the tradeoffs I see with it, and explain why I think the idea is a good change.

TL;DR: write T: Unsized in place of T: ?Sized (and sometimes T: DynSized)

The basic idea is to deprecate the ?Sized notation and instead have a family of Sized supertraits. As today, the default is that every type parameter T gets a T: Sized bound unless the user explicitly chooses one of the other supertraits:

/// Types whose size is known at compilation time (statically).
/// Implemented by (e.g.) `u32`. References to `Sized` types
/// are "thin pointers" -- just a pointer.
trait Sized: DynSized { }

/// Types whose size can be computed at runtime (dynamically).
/// Implemented by (e.g.) `[u32]` or `dyn Trait`.
/// References to these types are "wide pointers",
/// with the extra metadata making it possible to compute the size
/// at runtime.
trait DynSized: Unsized { }

/// Types that may not have a knowable size at all (either statically or dynamically).
/// All types implement this, but extern types **only** implement this.
trait Unsized { }

Under this proposal, T: ?Sized notation could be converted to T: DynSized or T: Unsized. T: DynSized matches the current semantics precisely, but T: Unsized is probably what most uses actually want. This is because most users of T: ?Sized never compute the size of T but rather just refer to existing values of T by pointer.

Credit where credit is due?

For the record, this design is not my idea, but I’m not sure where I saw it. I would appreciate a link so I can properly give credit.

Why do we have a default T: Sized bound in the first place?

It’s natural to wonder why we have this T: Sized default in the first place. The short version is that Rust would be very annoying to use without it. If the compiler doesn’t know the size of a value at compilation time, it cannot (at least, cannot easily) generate code to do a number of common things, such as store a value of type T on the stack or have structs with fields of type T. This means that a very large fraction of generic type parameters would wind up with T: Sized.
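
To make that concrete, here’s a small illustration of my own (not from the original proposal): unsized types can only live behind pointers, and in a struct they may only appear as the final field:

fn take_by_ref<T: ?Sized>(value: &T) -> &T {
    // Fine: the reference itself is Sized, even if T is not.
    value
}

// fn take_by_value<T: ?Sized>(value: T) {}
// ^ error: `T` may not be Sized, so it cannot be passed by value today.

struct Holder<T: ?Sized> {
    len: u32, // Sized fields must come first...
    last: T,  // ...and at most one unsized field is allowed, in last position.
}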

So why the ?Sized notation?

The ?Sized notation was the result of a lot of discussion. It satisfied a number of criteria.

? signals that the bound operates in reverse

The ? is meant to signal that a bound like ?Sized actually works in reverse from a normal bound. When you have T: Clone, you are saying “type T must implement Clone”. So you are narrowing the set of types that T could be: before, it could have been both types that implement Clone and those that do not. After, it can only be types that implement Clone. T: ?Sized does the reverse: before, it can only be types that implement Sized (like u32), but after, it can also be types that do not (like [u32] or dyn Debug). Hence the ?, which can be read as “maybe” — i.e., T is “maybe” Sized.

? can be extended to other default bounds

The ? notation also scales to other default traits. Although we’ve been reluctant to exercise this ability, we wanted to leave room to add a new default bound. This power will be needed if we ever adopt “must move” types¹ or add a bound like ?Leak to signal a value that cannot be leaked.

But ? doesn’t scale well to “differences in degree”

When we debated the ? notation, we thought a lot about extensibility to other orthogonal defaults (like ?Leak), but we didn’t consider extending a single dimension (like Sized) to multiple levels. There is no theoretical challenge. In principle we could say…

  • T means T: Sized + DynSized
  • T: ?Sized drops the Sized default, leaving T: DynSized
  • T: ?DynSized drops both, leaving any type T

…but I personally find that very confusing. To me, saying something “might be statically sized” does not signify that it is dynamically sized.

And ? looks “more magical” than it needs to

Despite knowing that T: ?Sized operates in reverse, I find that in practice it still feels very much like other bounds. Just like T: Debug gives the function the extra capability of formatting values for debug output, T: ?Sized feels to me like it gives the function an extra capability: the ability to be used on unsized types. This logic is specious, since these are different kinds of capabilities, but, as I said, it’s how I find myself thinking about it.

Moreover, even though I know that T: ?Sized “most properly” means “a type that may or may not be Sized”, I find I wind up thinking about it as “a type that is unsized”, just as I think about T: Debug as a “type that is Debug”. Why is that? Well, because ?Sized types may be unsized, I have to treat them as if they are unsized – i.e., refer to them only by pointer. So the fact that they might also be sized isn’t very relevant.

How would we use these new traits?

So if we adopted the “family of sized traits” proposal, how would we use it? Well, for starters, the size_of methods would no longer be defined as T and T: ?Sized…

fn size_of<T>() -> usize {}
fn size_of_val<T: ?Sized>(t: &T) -> usize {}

… but instead as T and T: DynSized

fn size_of<T>() -> usize {}
fn size_of_val<T: DynSized>(t: &T) -> usize {}

That said, most uses of ?Sized today do not need to compute the size of the value, and would be better translated to Unsized:

impl<T: Unsized> Debug for &T {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { .. }
}

Option: Defaults could also be disabled by supertraits?

As an interesting extension to today’s system, we could say that every type parameter T gets an implicit Sized bound unless either…

  1. There is an explicit weaker alternative (like T: DynSized or T: Unsized);
  2. Or some other bound T: Trait has an explicit supertrait DynSized or Unsized.

This would clarify that trait aliases can be used to disable the Sized default. For example, today, one might create a Value trait that is equivalent to Debug + Hash + Ord, roughly like this:

trait Value: Debug + Hash + Ord {
    // Note that `Self` is the *only* type parameter that does NOT get `Sized` by default
}

impl<T: ?Sized + Debug + Hash + Ord> Value for T {}

But what if, in your particular data structure, all values are boxed and hence can be unsized? Today, you have to repeat ?Sized everywhere:

struct Tree<V: ?Sized + Value> {
    value: Box<V>,
    children: Vec<Tree<V>>,
}

impl<V: ?Sized + Value> Tree<V> {  }

With this proposal, the explicit Unsized bound could be signaled on the trait:

trait Value: Debug + Hash + Ord + Unsized {
    // Note that `Self` is the *only* type parameter that does NOT get `Sized` by default
}

impl<T: Unsized + Debug + Hash + Ord> Value for T {}

which would mean that

struct Tree<V: Value> {  }

would imply V: Unsized.

Alternatives

Different names

The name of the Unsized trait in particular is a bit odd. It means “you can treat this type as unsized”, which is true of all types, but it sounds like the type is definitely unsized. I’m open to alternative names, but I haven’t come up with one I like yet. Here are some alternatives and the problems I see with them:

  • Unsizeable — doesn’t meet our typical name conventions, has overlap with the Unsize trait
  • NoSize, UnknownSize — same general problem as Unsize
  • ByPointer — in some ways, I kind of like this, because it says “you can work with this type by pointer”, which is clearly true of all types. But it doesn’t align well with the existing Sized trait — what would we call that, ByValue? And it seems too tied to today’s limitations: there are, after all, ways that we can make DynSized types work by value, at least in some places.
  • MaybeSized — just seems awkward, and should it be MaybeDynSized?

All told, I think Unsized is the best name. It’s a bit wrong, but I think you can understand it, and to me it fits the intuition I have, which is that I mark type parameters as Unsized and then I tend to just think of them as being unsized (since I have to).

Some sigil

Under this proposal, the DynSized and Unsized traits are “magic” in that explicitly declaring them as a bound has the impact of disabling a default T: Sized bound. We could signal that by prefixing their names with some sort of sigil. I’m not really sure what that sigil would be — T: %Unsized? T: ?Unsized? It all seems unnecessary.

Drop the implicit bound altogether

The purist in me is tempted to question whether we need the default bound. Maybe in Rust 2027 we should try to drop it altogether. Then people could write

fn size_of<T: Sized>() -> usize {}
fn size_of_val<T: DynSized>(t: &T) -> usize {}

and

impl<T> Debug for &T {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { .. }
}

Of course, it would also mean a lot of Sized bounds cropping up in surprising places. Beyond random functions, consider that every associated type today has a default Sized bound, so you would need

trait Iterator {
    type Item: Sized;
}

Overall, I doubt this idea is worth it. Not surprising: it was deemed too annoying before, and now it has the added problem of being hugely disruptive.

Conclusion

I’ve covered a design to move away from ?Sized bounds and towards specialized traits. There are various “pros and cons” to this proposal, but one aspect in particular feels common to this question and many others: when do you make two “similar but different” concepts feel very different — e.g., via special syntax like T: ?Sized — and when do you make them feel very similar — e.g., via the idea of “special traits” where a bound like T: Unsized has extra meaning (disabling defaults).

There is a definite trade-off here. Distinct syntax helps avoid potential confusion, but it forces people to recognize that something special is going on even when that may not be relevant or important to them. This can deter folks early on, when they are most “deter-able”. I think it can also contribute to a general sense of “big-ness” that makes it feel like understanding the entire language is harder.

Over time, I’ve started to believe that it’s generally better to make things feel similar, letting people push off the time at which they have to learn a new concept. In this case, this lessens my fears around the idea that Unsized and DynSized traits would be confusing because they behave differently than other traits. In this particular case, I also feel that ?Sized doesn’t “scale well” to default bounds where you want to pick from one of many options, so it’s kind of the worst of both worlds – distinct syntax that shouts at you but which also fails to add clarity.

Ultimately, though, I’m not wedded to this idea, but I am interested in kicking off a discussion of how we can unblock extern types. I think by now we’ve no doubt covered the space pretty well and we should pick a direction and go for it (or else just give up on extern types).


  1. I still think “must move” types are a good idea — but that’s a topic for another post. ↩︎

Hacks.Mozilla.OrgPorting a cross-platform GUI application to Rust

Firefox’s crash reporter is hopefully not something that most users experience often. However, it is still a very important component of Firefox, as it is integral in providing insight into the most visible bugs: those which crash the main process. These bugs offer the worst user experience (since the entire application must close), so fixing them is a very high priority. Other types of crashes, such as content (tab) crashes, can be handled by the browser and reported gracefully, sometimes without the user being aware that an issue occurred at all. But when the main browser process comes to a halt, we need another separate application to gather information about the crash and interact with the user.

This post details the approach we have taken to rewrite the crash reporter in Rust. We discuss the reasoning behind this rewrite, what makes the crash reporter a unique application, the architecture we used, and some details of the implementation.

Why Rewrite?

Even though it is important to properly handle main process crashes, the crash reporter hasn’t received significant development in a while (aside from development to ensure that crash reports and telemetry continue to be reliably delivered)! It has long been stuck in a local maximum of “good enough” and “scary to maintain”: it features 3 individual GUI implementations (Win32 for Windows, GTK+ for Linux, and Cocoa for macOS), glue code abstracting a few things (mostly in C++, and Objective-C for macOS), a binary blob produced by obsoleted Apple development tools, and no test suite. Because of this, there is a backlog of features and improvements which haven’t been acted on.

We’ve recently had a number of successful pushes to decrease crash rates (including both big leaps and many small bug fixes), and the crash reporter has functioned well enough for our needs during this time. However, we’ve reached an inflection point where improving the crash reporter would provide valuable insight to enable us to decrease the crash rate even further. For the reasons previously mentioned, improving the current codebase is difficult and error-prone, so we deemed it appropriate to rewrite the application so we can more easily act on the feature backlog and improve crash reports.

Like many components of Firefox, we decided to use Rust for this rewrite to produce a more reliable and maintainable program. Besides the often-touted memory safety built into Rust, its type system and standard library make reasoning about code, handling errors, and developing cross-platform applications far more robust and comprehensive.

Crash Reporting is an Edge Case

There are a number of features of the crash reporter which make it quite unique, especially compared to other components which have been ported to Rust. For one thing, it is a standalone, individual program; basically no other components of Firefox are used in this way. Firefox itself launches many processes as a means of sandboxing and insulating against crashes; however, these processes all talk to one another and have access to the same code base.

The crash reporter has a very unique requirement: it must use as little as possible of the Firefox code base, ideally none! We don’t want it to rely on code which may be buggy and cause the reporter itself to crash. Using a completely independent implementation ensures that when a main process crash does occur, the cause of that crash won’t affect the reporter’s functionality as well.

The crash reporter also necessarily has a GUI. This alone may not separate it from other Firefox components, but we can’t leverage any of the cross-platform rendering goodness that Firefox provides! So we need to implement a cross-platform GUI independent of Firefox as well. You might think we could reach for an existing cross-platform GUI crate; however, we have a few reasons not to do so.

  • We want to minimize the use of external code: to improve crash reporter reliability (which is paramount), we want it to be as simple and auditable as possible.
  • Firefox vendors all dependencies in-tree, so we are hesitant to bring in large dependencies (GUI libraries are likely pretty sizable).
  • There are only a few third-party crates that provide a native OS look and feel (or actually use native GUI APIs): it’s desirable for the crash reporter to have a native feel to be familiar to users and take advantage of accessibility features.

So all of this is to say that third-party cross-platform GUI libraries aren’t a favorable option.

These requirements significantly narrow the approach that can be used.

Building a GUI View Abstraction

In order to make the crash reporter more maintainable (and make it easier to add new features in the future), we want to have as minimal and generic platform-specific code as possible. We can achieve this by using a simple UI model that can be converted into native GUI code for each platform. Each UI implementation will need to provide two methods (over arbitrary platform-specific &self data):

/// Run a UI loop, displaying all windows of the application until it terminates.
fn run_loop(&self, app: model::Application)

/// Invoke a function asynchronously on the UI loop thread.
fn invoke(&self, f: model::InvokeFn)

The run_loop function is pretty self-explanatory: the UI implementation takes an Application model (which we’ll discuss shortly) and runs the application, blocking until the application is complete. Conveniently, our target platforms generally have similar assumptions around threading: the UI runs in a single thread and typically runs an event loop which blocks on new events until an event signaling the end of the application is received.

There are some cases where we’ll need to run a function on the UI thread asynchronously (like displaying a window, updating a text field, etc). Since run_loop blocks, we need the invoke method to define how to do this. This threading model will make it easy to use the platform GUI frameworks: everything calling native functions will occur on a single thread (the main thread in fact) for the duration of the program.

This is a good time to be a bit more specific about exactly what each UI implementation will look like. We’ll discuss pain points for each later on. There are 4 UI implementations:

  • A Windows implementation using the Win32 API.
  • A macOS implementation using Cocoa (AppKit and Foundation frameworks).
  • A Linux implementation using GTK+ 3 (the “+” has since been dropped in GTK 4, so henceforth I’ll refer to it as “GTK”). Linux doesn’t provide its own GUI primitives, and we already ship GTK with Firefox on Linux to make a modern-feeling GUI, so we can use it for the crash reporter, too. Note that some platforms that aren’t directly supported by Mozilla (like BSDs) use the GTK implementation as well.
  • A testing implementation which will allow tests to hook into a virtual UI and poke things (to simulate interactions and read state).

One last detail before we dive in: the crash reporter (at least right now) has a pretty simple GUI. Because of this, an explicit non-goal of the development was to create a separate Rust GUI crate. We wanted to create just enough of an abstraction to cover the cases we needed in the crash reporter. If we need more controls in the future, we can add them to the abstraction, but we avoided spending extra cycles to fill out every GUI use case.

Likewise, we tried to avoid unnecessary development by allowing some tolerance for hacks and built-in edge cases. For example, our model defines a Button as an element which contains an arbitrary element, but actually supporting that with Win32 or AppKit would have required a lot of custom code, so we special case on a Button containing a Label (which is all we need right now, and an easy primitive available to us). I’m happy to say there aren’t really many special cases like that at all, but we are comfortable with the few that were needed.

The UI Model

Our model is a declarative structuring of concepts mostly present in GTK. Since GTK is a mature library with proven high-level UI concepts, this made it appropriate for our abstraction and made the GTK implementation pretty simple. For instance, the simplest way that GTK does layout (using container GUI elements and per-element margins/alignments) is good enough for our GUI, so we use similar definitions in the model. Notably, this “simple” layout definition is actually somewhat high-level and complicates the macOS and Windows implementations a bit (but this tradeoff is worth the ease of creating UI models).

The top-level type of our UI model is Application. This is pretty simple: we define an Application as a set of top-level Windows (though our application only has one) and whether the current locale is right-to-left. We inspect Firefox resources to use the same locale that Firefox would, so we don’t rely on the native GUI’s locale settings.

As you might expect, each Window contains a single root element. The rest of the model is made up of a handful of typical container and primitive GUI elements:

A class diagram showing the inheritance structure. An Application contains one or more Windows. A Window contains one Element. An Element is subclassed to Checkbox, Label, Progress, TextBox, Button, Scroll, HBox, and VBox types.

The crash reporter only needs 8 types of GUI elements! And really, Progress is used as a spinner rather than indicating any real progress as of right now, so it’s not strictly necessary (but nice to show).

Rust does not explicitly support the object-oriented concept of inheritance, so you might be wondering how each GUI element “extends” Element. The relationship represented in the picture is somewhat abstract; the implemented Element looks like:

pub struct Element {
    pub style: ElementStyle,
    pub element_type: ElementType
}

where ElementStyle contains all the common properties of elements (alignment, size, margin, visibility, and enabled state), and ElementType is an enum containing each of the specific GUI elements as variants.
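
Based on the eight element types listed above, the enum plausibly looks something like the following sketch (the exact variant payloads are an assumption, not copied from the actual source):

pub enum ElementType {
    Button(Button),
    Checkbox(Checkbox),
    HBox(HBox),
    Label(Label),
    Progress(Progress),
    Scroll(Scroll),
    TextBox(TextBox),
    VBox(VBox),
}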

Building the Model

The model elements are all intended to be consumed by the UI implementations; as such, almost all of the fields have public visibility. However, as a means of having a separate interface for building elements, we define an ElementBuilder<T> type. This type has methods that maintain assertions and provide convenience setters. For instance, many methods accept parameters that are impl Into<MemberType>, some methods like margin() set multiple values (but you can be more specific with margin_top()), etc.

There is a general impl<T> ElementBuilder<T> which provides setters for the various ElementStyle properties, and then each specific element type can also provide their own impl ElementBuilder<SpecificElement> with additional properties unique to the element type.
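
Here’s a hedged sketch of how those two layers of impls could fit together; the stub types and fields are assumptions, kept minimal so the example stands alone:

// Minimal stand-ins; the real types are richer.
#[derive(Default)]
pub struct Margin(pub u32);

impl From<u32> for Margin {
    fn from(v: u32) -> Self {
        Margin(v)
    }
}

#[derive(Default)]
pub struct ElementStyle {
    pub margin: Margin,
}

#[derive(Default)]
pub struct Label {
    pub text: String,
}

pub struct ElementBuilder<T> {
    style: ElementStyle,
    element: T,
}

impl<T> ElementBuilder<T> {
    // A setter shared by every element type.
    pub fn margin(&mut self, value: impl Into<Margin>) -> &mut Self {
        self.style.margin = value.into();
        self
    }
}

impl ElementBuilder<Label> {
    // A setter specific to one element type.
    pub fn text(&mut self, value: impl Into<String>) -> &mut Self {
        self.element.text = value.into();
        self
    }
}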

We combine ElementBuilder<T> with the final piece of the puzzle: a ui! macro. This macro allows us to write our UI in a declarative manner. For example, it allows us to write:

let details_window = ui! {
    Window title("Crash Details") visible(show_details) modal(true) hsize(600) vsize(400)
        halign(Alignment::Fill) valign(Alignment::Fill)
    {
        VBox margin(10) spacing(10) halign(Alignment::Fill) valign(Alignment::Fill) {
            Scroll halign(Alignment::Fill) valign(Alignment::Fill) {
                TextBox content(details) halign(Alignment::Fill) valign(Alignment::Fill)
            },
            Button halign(Alignment::End) on_click(move || *show_details.borrow_mut() = false)
            {
                Label text("Ok")
            }
        }
    }
};

The implementation of ui! is fairly simple. The first identifier provides the element type and an ElementBuilder<T> is created. After that, the remaining method-call-like syntax forms are called on the builder (which is mutable).

Optionally, a final set of curly braces indicates that the element has children. In that case, the macro is recursively called to create them, and add_child is called on the builder with the result (so we just need to make sure a builder has an add_child method). Ultimately the syntax transformation is pretty simple, but I believe that this macro is a little bit more than just syntax sugar: it makes reading and editing the UI a fair bit clearer, since the hierarchy of elements is represented in the syntax. Unfortunately, a downside is that there’s no way to support automatic formatting of such macro DSLs, so developers will need to maintain sane formatting by hand.

So now we have a model defined and a declarative way of building it. But we haven’t discussed any dynamic runtime behaviors here. In the above example, we see an on_click handler being set on a Button. We also see things like the Window’s visible property being set to a show_details value, which the on_click handler changes when the button is clicked. We hook into this declarative UI to change or react to events at runtime using a set of simple data binding primitives with which UI implementations can interact.

Many GUI frameworks nowadays (both for Rust and other languages) have been built with the “diffing element trees” architecture (think React), where your code is (at least mostly) functional and side-effect-free and produces the GUI view as a function of the current state. This approach has its tradeoffs: for instance, it makes complicated, stateful alterations of the layout very simple to write, understand, and maintain, and encourages a clean separation of model and view! However since we aren’t writing a framework, and our application is and will remain fairly simple, the benefits of such an architecture were not worth the additional development burden. Our implementation is more similar to the MVVM architecture:

  • the model is, well, the model discussed here;
  • the views are the various UI implementations; and
  • the viewmodel is (loosely, if you squint) the collection of data bindings.

Data Binding

There are a few types which we use to declare dynamic (runtime-changeable) values. In our UI, we needed to support a few different behaviors:

  • triggering events, i.e., what happens when a button is clicked,
  • synchronized values which will mirror and notify of changes to all clones, and
  • on-demand values which can be queried for the current value.

On-demand values are used to get textbox contents rather than using a synchronized value, in an effort to avoid implementing debouncing in each UI. It may not be terribly difficult to do so, but it also wasn’t difficult to support the on-demand implementation.

As a means of convenience, we created a Property type which encompasses the value-oriented fields as well. A Property<T> can be set to either a static value (T), a synchronized value (Synchronized<T>), or an on-demand value (OnDemand<T>). It supports an impl From for each of these, so that builder methods can look like fn my_method(&mut self, value: impl Into<Property<T>>) allowing any supported value to be passed in a UI declaration.
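
As a sketch of the shape Property could take (the enum layout and the Rc-based stubs are guesses informed by the Clone discussion below, not the actual implementation):

use std::cell::RefCell;
use std::rc::Rc;

// Stand-ins for the synchronized and on-demand binding types; the real
// versions also carry change-notification callbacks.
#[derive(Clone)]
pub struct Synchronized<T>(Rc<RefCell<T>>);

#[derive(Clone)]
pub struct OnDemand<T>(Rc<dyn Fn() -> T>);

pub enum Property<T> {
    Static(T),
    Synchronized(Synchronized<T>),
    OnDemand(OnDemand<T>),
}

// The From conversions let builder methods accept any of the three kinds.
impl<T> From<T> for Property<T> {
    fn from(value: T) -> Self {
        Property::Static(value)
    }
}

impl<T> From<Synchronized<T>> for Property<T> {
    fn from(value: Synchronized<T>) -> Self {
        Property::Synchronized(value)
    }
}

impl<T> From<OnDemand<T>> for Property<T> {
    fn from(value: OnDemand<T>) -> Self {
        Property::OnDemand(value)
    }
}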

We won’t discuss the implementation in depth (it’s what you’d expect), but it’s worth noting that these are all Clone to easily share the data bindings: they use Rc (we don’t need thread safety) and RefCell as necessary to access callbacks.

In the example from the last section, show_details is a Synchronized<bool> value. When it changes, the UI implementations change the associated window visibility. The Button on_click callback sets the synchronized value to false, hiding the window (note that the details window used in this example is never closed, it is just shown and hidden).

In a former iteration, data binding types had a lifetime parameter which specified the lifetime for which event callbacks were valid. While we were able to make this work, it greatly complicated the code, especially because there’s no way to communicate the correct covariance of the lifetime to the compiler, so there was additional unsafe code transmuting lifetimes (though it was contained as an implementation detail). These lifetimes were also infectious, requiring some of the complicated semantics regarding their safety to be propagated into the model types which stored Property fields.

Much of this was to avoid cloning values into the callbacks, but changing these types to all be Clone and store static-lifetime callbacks was worth making the code far more maintainable.

Threading and Thread Safety

The careful reader might remember that we discussed how our threading model involves interacting with the UI implementations only on the main thread. This includes updating the data bindings, since the UI implementations might have registered callbacks on them! While we could run everything in the main thread, it’s generally a much better experience to do as much off of the UI thread as possible, even if we don’t do much that’s blocking (though we will be blocking when we send crash reports). We want our business logic to default to being off of the main thread so that the UI doesn’t ever freeze. We can guarantee this with some careful design.

The simplest way to guarantee this behavior is to put all of the business logic in one (non-Clone, non-Sync) type (let’s call it Logic) and construct the UI and UI state (like Property values) in another type (let’s call it UI). We can then move the Logic value into a separate thread to guarantee that UI can’t interact with Logic directly, and vice versa. Of course we do need to communicate sometimes! But we want to ensure that this communication will always be delegated to the thread which owns the values (rather than the values directly interacting with each other).

We can accomplish this by creating an enqueuing function for each type and storing that in the opposite type. Such a function will be passed boxed functions to run on the owning thread that get a reference to the owned type (e.g., Box<dyn FnOnce(&T) + Send + 'static>). This is simple to create: for the UI thread, it is just the UI implementation’s invoke method which we briefly discussed previously. The Logic thread does nothing but run a loop which will get these functions and run them on the owned value (we just enqueue and pass them using an mpsc::channel). Now each type can asynchronously call methods on the other with the guarantee that they’ll be run on the correct thread.
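
Here’s that scheme in miniature (the Logic stand-in and function names are illustrative, not from the actual code):

use std::sync::mpsc;
use std::thread;

struct Logic; // stand-in for the business-logic type described above

// Boxed functions that will run on the thread owning the `Logic` value.
type LogicFn = Box<dyn FnOnce(&Logic) + Send + 'static>;

// Move `logic` onto its own thread and hand back the enqueuing side.
fn spawn_logic(logic: Logic) -> mpsc::Sender<LogicFn> {
    let (tx, rx) = mpsc::channel::<LogicFn>();
    thread::spawn(move || {
        // Block on the channel, running each delegated function against
        // the owned value; the loop ends when all senders are dropped.
        for f in rx {
            f(&logic);
        }
    });
    tx
}

fn main() {
    let tx = spawn_logic(Logic);
    // e.g., a UI callback asks the logic thread to do some work:
    tx.send(Box::new(|_logic: &Logic| {
        // ... business logic runs here, off the UI thread ...
    }))
    .unwrap();
}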

In a former iteration, a more complicated scheme was used with thread-local storage and a central type which was responsible for both creating threads and delegating the functions. But with such a basic use case as two threads delegating between each other, we were able to distill this to the essential aspects needed, greatly simplifying the code.

Localization

One nice benefit of this rewrite is that we could bring the localization of the crash reporter up to speed with our modern tooling. In almost every other part of Firefox, we use fluent to handle localization. Using fluent in the crash reporter makes the experience of localizers more uniform and predictable; they do not need to understand more than one localization system (the crash reporter was one of the last holdouts of the old system). It was very easy to use in the new code, with just a bit of extra code to extract the localization files from the Firefox installation when the crash reporter is run. In the worst case scenario where we can’t find or access these files, we have the en-US definitions directly bundled in the crash reporter binary.

The UI Implementations

We won’t go into much detail about the implementations, but it’s worth talking about each a bit.

Linux (GTK)

The GTK implementation is probably the most straightforward and succinct. We use bindgen to generate Rust bindings to the GTK functions we need (avoiding vendoring any external crates). Then we simply call all of the corresponding GTK functions to set up the GTK widgets as described in the model (remember, the model was made to mirror some of the GTK concepts).

Since GTK is somewhat modern and meant to be written by humans (not automated tools like some of the other platforms), there weren’t really any pain points or unusual behaviors that needed to be addressed.

We have a handful of nice features to improve memory safety and correctness. A set of traits makes it easy to attach owned data to GObjects (ensuring data remains valid and is properly dropped when the GObject is destroyed), and a few macros set up the glue code between GTK signals and our data binding types.

Windows (Win32)

The Windows implementation may have been the most difficult to write, since Win32 GUIs are very rarely written nowadays and the API shows its age. We use the windows-sys crate to access bindings to the API (which was already vendored in the codebase for many other Windows API uses). This crate is generated directly from Windows function metadata (by Microsoft), but otherwise its bindings aren’t terribly different from what bindgen might have produced (though they are likely a bit more accurate).

There were a number of hurdles to overcome. For one thing, the Win32 API doesn’t provide any layout primitives, so the high-level layout concepts we use (which allow graceful resize/repositioning) had to be implemented manually. There are also quite a few extra API calls needed just to get a GUI that looks somewhat decent (correct window colors, font smoothing, high-DPI handling, etc.). Even the default font ends up being a terrible-looking bitmapped font rather than the more modern system default; we needed to manually retrieve the system default and set it as the font to use, which was a bit surprising!

We have a set of traits to facilitate creating custom window classes and managing associated window data of class instances. We also have wrapper types to properly manage the lifetimes of handles and perform type conversions (mainly String to null-terminated wide strings and back) as an extra layer of safety around the API.

macOS (Cocoa/AppKit)

The macOS implementation had its tricky parts, as overwhelmingly macOS GUIs are written with XCode and there’s a lot of automated and generated portions (such as nibs). We again use bindgen to generate Rust bindings, this time for the Objective-C APIs in macOS framework headers.

Unlike Windows and GTK, you don’t get keyboard shortcuts like Cmd-C, Cmd-Q, etc., for free when creating a GUI without e.g. XCode (which generates them for you as part of a new project template). To have the typical shortcuts that users expect, we needed to manually implement the application main menu (which is what governs keyboard shortcuts). We also had to handle runtime setup like creating Objective-C autorelease pools, bringing the window and application (which are separate concepts) to the foreground, etc. Even implementing invoke to call a function on the main thread had its nuances, since modal windows use a nested event loop which would not call queued functions under the default NSRunLoop mode.

We wrote some simple helper types and a macro to make it easy to implement, register, and create Objective-C classes from Rust code. We used this for creating delegate classes as well as subclassing some controls for the implementation (like NSButton); it made it easy to safely manage the memory of Rust values underlying the classes and correctly register class method selectors.

The Test UI

We’ll discuss testing in the next section. Our testing UI is very simple. It doesn’t create a GUI, but allows us to interact directly with the model. The ui! macro supports an extra piece of syntax when tests are enabled to optionally set a string identifier for each element. We use these strings in unit tests to access and interact with the UI. The data binding types also support a few additional methods in tests to easily manipulate values. This UI allows us to simulate button presses, field entry, etc, to ensure that other UI state changes as expected as well as simulating the system side effects.

Mocking and Testing

An important goal of our rewrite was to add tests to the crash reporter; our old code was sorely lacking them (in part because unit testing GUIs is notoriously difficult).

Mocking Everything

In the new code, we can mock the crash reporter regardless of whether we are running tests or not (though it is always mocked for tests). This is important because mocking allows us to (manually) run the GUI in various states to check that the GUI implementations are correct and render well. Our mocking not only mocks the inputs to the crash reporter (environment variables, command line parameters, etc), it also mocks all side-effectful std functions.

We accomplish this by having a std module in the crate, and using crate::std throughout the rest of the code. When mocking is disabled, crate::std is simply the same as ::std. But when it is enabled, a bunch of functions that we have written are used instead. These mock the filesystem, environment, launching external commands, and other side effects. Importantly, only the minimal amount to mock the existing functions is implemented, so that if e.g. some new functions from std::fs, std::net, etc. are used, the crate will fail to compile with mocking enabled (so that we don’t miss any side effects). This might sound like a lot of effort, but you might be surprised at how little of std really needed to be mocked, and most implementations were pretty straightforward.
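
A sketch of how that module switch might be wired up (the cargo feature named “mock” and the exact module list are assumptions):

// src/std/mod.rs
// With mocking disabled, `crate::std` is simply a re-export of the real
// standard library modules the crate uses.
#[cfg(not(feature = "mock"))]
pub use ::std::{env, fs, process};

// With mocking enabled, hand-written replacements take their place. Any
// std module *not* re-implemented here fails to resolve, so newly
// introduced side effects are caught at compile time.
#[cfg(feature = "mock")]
pub mod env;
#[cfg(feature = "mock")]
pub mod fs;
#[cfg(feature = "mock")]
pub mod process;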

Now that we have our code using different mocked functions, we need to have a way of injecting the desired mock data (both in tests and in our normal mocked operation). For example, we have the ability to return some data when a File is read, but we need to be able to set that data differently for tests. Without going into too much detail, we accomplish this using a thread-local store of mock data. This way, we don’t need to change any code to accommodate the mock data; we only need to make changes where we set and retrieve it. The programming language enthusiasts out there may recognize this as a form of dynamic scoping. The implementation allows our mock data to be set with code like

mock::builder()
    .set(
        crate::std::env::MockCurrentExe,
        "work_dir/crashreporter".into(),
    )
    .run(|| crash_reporter_main())

in tests, and

pub fn current_exe() -> std::io::Result<PathBuf> {
    Ok(MockCurrentExe.get(|r| r.clone()))
}

in our crate::std::env implementation.

Testing

With our mocking setup and test UI, we are able to extensively test the behavior of the crash reporter. The “last mile” of this testing which we can’t automate easily is whether each UI implementation faithfully represents the UI model. We manually test this with a mocked GUI for each platform.

Besides that, we are able to automatically test how arbitrary UI interactions cause the crash reporter to affect its own UI state and the environment (checking which programs are invoked and which network connections are made, what happens if they fail, succeed, or time out, etc). We also set up a mock filesystem and add assertions in various scenarios over the precise resulting filesystem state once the crash reporter completes. This greatly increases our confidence in the current behaviors and ensures that future changes will not alter them, which is of the utmost importance for such an essential component of our crash reporting pipeline.

The End Product

Of course we can’t get away with writing all of this without a picture of the crash reporter! This is what it looks like on Linux using GTK. The other GUI implementations look the same but styled with a native look and feel.

The crash reporter dialog on Linux.

Note that, for now, we wanted to keep it looking exactly the same as it previously did. So if you are unfortunate enough to see it, it shouldn’t appear as if anything has changed!

With a new, cleaned up crash reporter, we can finally unblock a number of long-standing feature requests and bug reports.

We are excited to iterate and improve further on crash reporter functionality. But ultimately it’d be wonderful if you never see or use it, and we are constantly working toward that goal!

The post Porting a cross-platform GUI application to Rust appeared first on Mozilla Hacks - the Web developer blog.

Firefox NightlyWall to Wall Improvements – These Weeks in Firefox: Issue 159

Highlights

  • The team is in the early stages of adding wallpaper support! This is still very preliminary, but you can test what they’ve currently landed on Nightly:
    • Set browser.newtabpage.activity-stream.newtabWallpapers.enabled to true in about:config
    • Click on the “gear” icon in the top-right of the new tab page
    • Choose a wallpaper! Note that you get different options depending on whether or not you’re using a light or dark theme.
Firefox's New Tab page with a beautiful image of the aurora borealis set as the background wallpaper

Set a new look for new tabs!

Friends of the Firefox team

Introductions/Shout-Outs

  • Shoutout to Yi Xiong Wong for submitting 16 patches to refactor a bunch of browser.js code into a separate file (bug)

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Camille
  • Magnus Melin [:mkmelin]
  • Meera Murthy

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtension APIs
  • As part of the ongoing work related to improving cross-browser compatibility for Manifest Version 3 extensions:
    • The options_page manifest property is supported as an alias of options_ui.page – Bug 1816960
    • A new webRequestAuthProvider permission allows extensions to register webRequest.onAuthRequired blocking listeners (in addition to the webRequestBlocking permission, which is deprecated on Chrome but still supported by Firefox) – Bug 1820569
    • commands.onCommand listeners now receive details of the currently active tab – Bug 1843866
    • WebExtensions with a granted activeTab permission can now call the tabs.captureVisibleTab API method without any additional host permissions – Bug 1784920
    • MessageSender details received by runtime.onMessage/runtime.onConnect listeners include a new origin property – Bug 1787379

Developer Tools

DevTools
  • Artem Manushenkov added a setting to disable the split console (#1731635)
  • Yury added support for the Wasm exception handling proposal in the Debugger (#1885589)
  • Emilio fixed a rendering issue that could happen after exiting Responsive Design Mode (#1888242)
  • Alex migrated the last DevTools JSMs to ESMs (#1789981, #1827382, #1888171)
  • Nicolas improved performance of the Inspector when modifying a single rule, in a stylesheet with a lot of rules (#1888079, #1888081)
  • Nicolas improved the Rules view by showing the color picker button on color functions using CSS variables in their definition (#1718894)
  • Bomsy fixed a crash in the netmonitor (#1884571)
  • Julian reverted the location of DevTools screenshots on OSX to match where Firefox screenshots are saved (#1845037)
  • Nicolas added @property rules (enabled on Nightly by default) in the Style Editor sidebar (#1886392)
WebDriver BiDi
  • New contributor: :gravyant improved the error message when the session.new command is used without a proper capabilities parameter (#1838152)
  • Julian Descottes added support for the contexts argument of the network.addIntercept command, which allows restricting a network intercept to a set of top-level browsing contexts (#1882260)
  • Sasha Borovova updated the storage.getCookies command to return third party cookies selectively, based on the value of the network.cookie.cookieBehavior and network.cookie.cookieBehavior.optInPartitioning preferences (#1879503)
  • Sasha Borovova removed the ownership and sandbox parameters for the browsingContext.locateNodes command to align with a recent specification update (#1884935)
  • Sasha Borovova updated the session.subscribe and session.unsubscribe commands to throw an error when the events or contexts parameters are empty arrays (#1887871)
  • The team completed Milestone 10 of the project (bug list), where we implemented 50% of the commands needed to completely support Puppeteer, with 75% of the Puppeteer unit tests passing with WebDriver BiDi. For Milestone 11 (bug list), our focus remains on implementing the remaining commands and features required to fully support Puppeteer (doc).

Lint, Docs and Workflow

Migration Improvements

Picture-in-Picture

  • Thanks to :joe.scott.webster for submitting a patch that fixes PiP caption issues with several Yahoo sites (bug) and filing a follow-up ticket for AOL (bug).
    • Also thanks to Niklas Baumgardner (:niklas) for lending a hand!

Performance

Screenshots (enabled by default in Nightly)

Search and Navigation

  • Firefox Suggest experience
    • Daisuke renamed the “Learn More about Firefox Suggest” menuitem to a more direct “Manage Firefox Suggest”. Bug 1889820
    • Drew added new telemetry to measure, in experiments, the potential exposure of simulated results depending on the typed search string. Bug 1881875
  • SERP categorization telemetry
    • James, Stephanie and Karandeep have landed many fixes to the storage, logging and measurements.
  • Search Config v2
    • Enabling the new config in Nightly lowered the number of initialization errors for the Search Service
    • Mark fixed character-set handling. Bug 1890698
    • Mark added a new property covering the device type. Bug 1889910
    • Mandy sorted collections by engine identifier and property names, to make the config more easily navigable and diffs nicer and easier to maintain. Bug 1889247
    • Mandy updated the documentation. Bug 1889037
  • Other fixes
    • Marco fixed a bug causing engagement on certain results to be registered both as engagement and abandonment. Bug 1888627
    • Dao has fixed alignment of “switch to tab” and “visit” chiclets. Bug 1886761

Storybook/Reusable Components

Firefox NightlyFirefox Nightly Now Available for Linux on ARM64

We’re excited to share an update with people running Linux on ARM64 (also known as AArch64) architectures.

ARM64 Binaries Are Here

After we launched the Firefox Nightly .deb package, feedback highlighted a demand for ARM64 builds. In response, we’re excited to now offer Firefox Nightly for ARM64 as both .tar archives and .deb packages. Keep the suggestions coming – feedback is always welcome!

  • .tar Archives: Prefer our traditional .tar.bz2 binaries? You can get them from our downloads page by selecting Firefox Nightly for Linux ARM64/AArch64.
  • .deb Packages: For updates and installation via our APT repository, you can follow these instructions and install the firefox-nightly package.

On ARM64 Build Stability

We want to be upfront about the current state of our ARM64 builds. Although we are confident in the quality of Firefox on this architecture, we are still incorporating comprehensive ARM64 testing into Firefox’s continuous integration and release pipeline. Our goal is to integrate ARM64 builds into Firefox’s extensive automated test suite, which will enable us to offer this architecture across the beta, release, and ESR channels.

Your Feedback Is Crucial

We encourage you to download the new ARM64 Firefox Nightly binaries, test them, and share your findings with us. By using these builds and reporting any issues, you’re empowering our developers to better support and test on this architecture, ultimately leading to a stable and reliable Firefox for ARM64. Please share your findings through Bugzilla and stay tuned for more updates. Thank you for your ongoing participation in the Firefox Nightly community!

IRL (podcast)Mozilla’s IRL podcast is a Shorty Awards finalist - we need your help to win!

We’re excited to share that Mozilla's IRL podcast is a Shorty Awards finalist in the Science and Technology Podcast category! If you enjoy IRL, you can show your support by voting for us.

The Shorty Awards recognizes great content by brands, agencies and nonprofits. It’s really an honor to be able to feature the voices and stories of the folks who are putting people over profit in AI. A Shorty Award will help bring these stories to even more listeners. 

How to vote

1. Go to mzl.la/shorty

2. Click 'Vote in Science and Technology Podcast'

3. Create a username and password (it's easy, we promise!)

4. Come back and vote every day until April 30th

We believe putting people over profit is award-worthy. Don’t you?  Thanks for your support!

Mozilla ThunderbirdAdventures In Rust: Bringing Exchange Support To Thunderbird

Microsoft Exchange is a popular choice of email service for corporations and educational institutions, and so it’s no surprise that there’s demand among Thunderbird users to support Exchange. Until recently, this functionality was only available through an add-on. But in the next Extended Support Release (ESR) of Thunderbird in July 2024, we expect to provide this support natively within Thunderbird. Because of the size of this undertaking, the first roll-out of the Exchange support will initially cover only email, with calendar and address book support coming at a later date.

This article will go into technical detail on how we are implementing support for the Microsoft Exchange Web Services mail protocol, and some idea of where we’re going next with the knowledge gained from this adventure.

Historical context

Thunderbird is a long-lived project, which means there’s lots of old code. The current architecture for supporting mail protocols predates Thunderbird itself, having been developed more than 20 years ago as part of Netscape Communicator. There was also no paid maintainership from about 2012 — when Mozilla divested and transferred ownership of Thunderbird to its community — until 2017, when Thunderbird rejoined the Mozilla Foundation. That means years of ad hoc changes without a larger architectural vision and a lot of decaying C++ code that was not using modern standards.

Furthermore, in the entire 20-year lifetime of the Thunderbird project, no one has added support for a new mail protocol before. As such, no one has updated the architecture as mail protocols change and adapt to modern usage patterns, and a great deal of institutional knowledge has been lost. Implementing this much-needed feature is the first organization-led effort to actually understand and address limitations of Thunderbird’s architecture in an incremental fashion.

Why we chose Rust

Thunderbird is a large project maintained by a small team, so choosing a language for new work cannot be taken lightly. We need powerful tools to develop complex features relatively quickly, but we absolutely must balance this with long-term maintainability. Selecting Rust as the language for our new protocol support brings some important benefits:

  1. Memory safety. Thunderbird takes input from anyone who sends an email, so we need to be diligent about keeping security bugs out.
  2. Performance. Rust runs as native code with all of the associated performance benefits.
  3. Modularity and Ecosystem. The built-in modularity of Rust gives us access to a large ecosystem where a lot of people are already doing email-related work that we can benefit from.

The above are all on the standard list of benefits when discussing Rust. However, there are some additional considerations for Thunderbird:

  1. Firefox. Thunderbird is built on top of Firefox code and we use a shared CI infrastructure with Firefox which already enables Rust. Additionally, Firefox provides a language interop layer called XPCOM (Cross-Platform Component Object Model), which has Rust support and allows us to call between Rust, C++, and JavaScript.
  2. Powerful tools. Rust gives us a large toolbox for building APIs which are difficult to misuse by pushing logical errors into the domain of the compiler. We can easily avoid circular references or provide functions which simply cannot be called with values which don’t make sense, letting us have a high degree of confidence in features with a large scope. Rust also provides first-class tooling for documentation, which is critically important on a small team.
  3. Addressing architectural technical debt. Introducing a new language gives us a chance to reconsider some aging architectures while benefiting from a growing language community.
  4. Platform support and portability. Rust supports a broad set of host platforms. By building modular crates, we can reuse our work in other projects, such as Thunderbird for Android/K-9 Mail.

Some mishaps along the way

Of course, the endeavor to introduce our first Rust component in Thunderbird is not without its challenges, mostly related to the size of the Thunderbird codebase. For example, there is a lot of existing code with idiosyncratic asynchronous patterns that don’t integrate nicely with idiomatic Rust. There are also lots of features and capabilities in the Firefox and Thunderbird codebase that don’t have any existing Rust bindings.

The first roadblock: the build system

Our first hurdle came with getting any Rust code to run in Thunderbird at all. There are two things you need to know to understand why:

First, since the Firefox code is a dependency of Thunderbird, you might expect that we pull in their code as a subtree of our own, or some similar mechanism. However, for historical reasons, it’s the other way around: building Thunderbird requires fetching Firefox’s code, fetching Thunderbird’s code as a subtree of Firefox’s, and using a build configuration file to point into that subtree.

Second, because Firefox’s entrypoint is written in C++ and Rust calls happen via an interoperability layer, there is no single point of entry for Rust. In order to create a tree-wide dependency graph for Cargo and avoid duplicate builds or version/feature conflicts, Firefox introduced a hack to generate a single Cargo workspace which aggregates all the individual crates in the tree.

In isolation, neither of these is a problem. However, in order to build Rust into Thunderbird, we needed to define our own Cargo workspace which lives in our tree, and Cargo does not allow nesting workspaces. To solve this, we added configuration to the upstream build tool, mach, so that it builds from our workspace instead of Firefox’s. We then use a newly-added mach subcommand to sync our dependencies and lockfile with upstream and to vendor the resulting superset.

XPCOM

While the availability of language interop through XPCOM is important for integrating our frontend and backend, the developer experience has presented some challenges. Because XPCOM was originally designed with C++ in mind, implementing or consuming an XPCOM interface requires a lot of boilerplate and prevents us from taking full advantage of tools like rust-analyzer. Over time, Firefox has significantly reduced its reliance on XPCOM, making a clunky Rust+XPCOM experience a relatively minor consideration. However, as part of the previously-discussed maintenance gap, Thunderbird never undertook a similar project, and supporting a new mail protocol requires implementing hundreds of functions defined in XPCOM.

Existing protocol implementations ease this burden by inheriting C++ classes which provide the basis for most of the shared behavior. Since we can’t do this directly, we are instead implementing our protocol-specific logic in Rust and communicating with a bridge class in C++ which combines our Rust implementations (an internal crate called ews_xpcom) with the existing code for shared behavior, with as small an interface between the two as we can manage.
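
To make that division of labor concrete, here is a rough, self-contained sketch of the pattern. Every name in it (EwsClient, sync_folder_hierarchy) is invented for this illustration rather than taken from ews_xpcom, and the real code calls into XPCOM instead of returning stub data; the point is only the shape of a narrow, coarse-grained Rust surface:

// Conceptual sketch, not Thunderbird's actual code: protocol-specific logic
// sits behind a deliberately narrow Rust surface for a C++ bridge to call.
pub struct EwsClient {
    endpoint: String,
}

impl EwsClient {
    pub fn new(endpoint: impl Into<String>) -> Self {
        Self { endpoint: endpoint.into() }
    }

    // One coarse-grained operation per entry point keeps the cross-language
    // interface as small as we can manage.
    pub fn sync_folder_hierarchy(&self) -> Result<Vec<String>, String> {
        // The real implementation issues EWS requests over HTTP; this stub
        // only demonstrates the shape of the API surface.
        Ok(vec![format!("Inbox ({})", self.endpoint)])
    }
}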

Please visit our documentation to learn more about how to create Rust components in Thunderbird.

Implementing Exchange support with Rust

Despite the technical hiccups along the way, we were able to clear the hurdles and build and use Rust within Thunderbird. Now we can talk about how we’re using it and the tools we’re building. Remember all the way back to the beginning of this blog post, where we stated that our goal is to support Microsoft’s Exchange Web Services (EWS) API. EWS communicates over HTTP, with request and response bodies in XML.

Sending HTTP requests

Firefox already includes a full-featured HTTP stack via its necko networking component. However, necko is written in C++ and exposed over XPCOM, which, as previously stated, does not make for nice, idiomatic Rust. Simply sending a GET request requires a great deal of boilerplate, including nasty-looking unsafe blocks where we call into XPCOM. (XPCOM manages the lifetime of pointers and their referents, ensuring memory safety, but the Rust compiler doesn’t know this.) Additionally, the interfaces we need are callback-based. To make HTTP requests simple for developers, we needed to do two things:

  1. Support native Rust async/await syntax. For this, we added a new Thunderbird-internal crate, xpcom_async. This is a low-level crate which translates asynchronous operations in XPCOM into Rust’s native async syntax by defining callbacks that buffer incoming data and expose it through an implementation of Rust’s Future trait, so that it can be awaited by consumers. (If you’re not familiar with the Future concept in Rust, it is similar to a JS Promise or a Python coroutine; a sketch of the underlying technique follows this list.)
  2. Provide an idiomatic HTTP API. Now that we had native async/await support, we created another internal crate (moz_http) which provides an HTTP client inspired by reqwest. This crate handles creating all of the necessary XPCOM objects and providing Rustic error handling (much nicer than the standard XPCOM error handling).
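
The xpcom_async crate itself is internal to Thunderbird, but the core technique it relies on (adapting a callback-based API into a type that implements Rust's Future trait) can be sketched with nothing but the standard library. In this minimal, illustrative sketch, all of the names (Shared, CallbackFuture, on_data_available) are ours, not Thunderbird's:

// Illustrative only: the standard callback-to-Future bridging pattern,
// using only the standard library.
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

// State shared between the callback (producer) and the future (consumer).
struct Shared {
    result: Option<Vec<u8>>,
    waker: Option<Waker>,
}

// A future that resolves once a callback-based API has delivered its data.
struct CallbackFuture(Arc<Mutex<Shared>>);

impl Future for CallbackFuture {
    type Output = Vec<u8>;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let mut shared = self.0.lock().unwrap();
        match shared.result.take() {
            // The callback already fired: hand the buffered data to .await.
            Some(data) => Poll::Ready(data),
            // Not yet: remember how to wake this task, then suspend it.
            None => {
                shared.waker = Some(cx.waker().clone());
                Poll::Pending
            }
        }
    }
}

// The role an XPCOM-style completion callback plays: buffer the incoming
// data and wake the awaiting task.
fn on_data_available(shared: &Arc<Mutex<Shared>>, data: Vec<u8>) {
    let mut s = shared.lock().unwrap();
    s.result = Some(data);
    if let Some(waker) = s.waker.take() {
        waker.wake();
    }
}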

Handling XML requests and responses

The hardest task in working with EWS is translating between our code’s own data structures and the XML expected/provided by EWS. Existing crates for serializing/deserializing XML didn’t meet our needs. serde’s data model doesn’t align well with XML, making it difficult to distinguish XML attributes from elements. EWS is also sensitive to XML namespaces, which are completely foreign to serde. Various serde-inspired crates designed for XML exist, but these require explicit annotation of how to serialize every field. EWS defines hundreds of types which can have dozens of fields, making that amount of boilerplate untenable.

Ultimately, we found that existing serde-based implementations worked fine for deserializing XML into Rust, but we were unable to find a satisfactory tool for serialization. To that end, we introduced another new crate, xml_struct. This crate defines traits governing serialization behavior and uses Rust’s procedural derive macros to automatically generate implementations of these traits for Rust data structures. It is built on top of the existing quick_xml crate and designed to create a low-boilerplate, intuitive mapping between XML and Rust. While it is in the early stages of development, it does not make use of any Thunderbird/Firefox internals and is available on GitHub.
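
To make the mapping concrete, here is a minimal, hand-written illustration of the kind of impl that a derive macro such as xml_struct's can generate automatically. The trait and type names (ToXml, FolderId) are invented for this illustration, and real EWS serialization also has to handle namespaces and attributes, which this sketch omits:

// Hand-written illustration of what an XML-serialization derive generates.
// `ToXml` and `FolderId` are invented names, not part of xml_struct's API.
trait ToXml {
    fn to_xml(&self) -> String;
}

struct FolderId {
    id: String,
}

impl ToXml for FolderId {
    fn to_xml(&self) -> String {
        // Struct fields map to child elements. Writing impls like this by
        // hand for hundreds of EWS types is exactly the boilerplate that a
        // derive macro eliminates.
        format!("<FolderId><Id>{}</Id></FolderId>", self.id)
    }
}

fn main() {
    let folder = FolderId { id: "folder-1".into() };
    println!("{}", folder.to_xml()); // <FolderId><Id>folder-1</Id></FolderId>
}

Multiply an impl like this by hundreds of EWS types, and the appeal of deriving it automatically becomes obvious.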

We have also introduced one more new crate, ews, which defines types for working with EWS and an API for XML serialization/deserialization, based on xml_struct and serde. Like xml_struct, it is in the early stages of development, but is available on GitHub.

Overall flow chart

Below, you can find a handy flow chart to help understand the logical flow for making an Exchange request and handling the response. 


Fig 1. A bird’s eye view of the flow

What’s next?

Testing all the things

Before landing our next major features, we are taking some time to build out our automated tests. In addition to unit tests, we just landed a mock EWS server for integration testing. The current focus on testing is already paying dividends, having exposed a couple of crashes and some double-sync issues which have since been rectified. Going forward, new features can now be easily tested and verified.

Improving error handling

While we are working on testing, we are also busy improving the story around error handling. EWS’s error behavior is often poorly documented, and errors can occur at multiple levels (e.g., a request may fail as a whole due to throttling or incorrect structure, or parts of a request may succeed while other parts fail due to incorrect IDs). Some errors we can handle at the protocol level, while others may require user intervention or may be intractable. In taking the time now to improve error handling, we can provide a more polished implementation and set ourselves up for easier long-term maintenance.

Expanding support

We are working on expanding protocol support for EWS (via ews and the internal ews_xpcom crate) and hooking it into the Thunderbird UI. Earlier this month, we landed a series of patches which allow adding an EWS account to Thunderbird, syncing the account’s folder hierarchy from the remote server, and displaying those folders in the UI. (At present, this alpha-state functionality is gated behind a build flag and a preference.) Next up, we’ll work on fetching message lists from the remote server as well as generalizing outgoing mail support in Thunderbird.

Documentation

Of course, all of our work on maintainability is for naught if no one understands what the code does. To that end, we’re producing documentation on how all of the bits we have talked about here come together, as well as describing the existing architecture of mail protocols in Thunderbird and thoughts on future improvements, so that once the work of supporting EWS is done, we can continue building and improving on the Thunderbird you know and love.

QUESTIONS FROM YOU
EWS is deprecated, with removal planned for 2026. Are there plans to add support for Microsoft Graph in Thunderbird?

This is a common enough question that we probably should have addressed it in the post! EWS will no longer be available for Exchange Online in October 2026, but our research in the lead-up to this project showed that there’s a significant number of users who are still using on-premise installs of Exchange Server. That is, many companies and educational institutions are running Exchange Server on their own hardware.

These on-premise installs largely support EWS, but they cannot support the Azure-based Graph API. We expect that this will continue to be the case for some time to come, and EWS provides a means of supporting those users for the foreseeable future. Additionally, we found a few outstanding issues with the Graph API (which is built with web-based services in mind, not desktop applications), and adding EWS support allows us to take some extra time to find solutions to those problems before building Graph API support.

Diving into the past has enabled a sound, engineering-led strategy for dealing with the future: thanks to the deep dive into the existing Thunderbird architecture, we can begin to leverage more efficient and productive patterns and technologies when implementing protocols.

In time, this will have far-reaching consequences for the Thunderbird code base, which will not only run faster and more reliably, but also carry a significantly reduced maintenance burden when landing bug fixes and new features.

Rust and EWS are elements of a larger effort in Thunderbird to reduce turnaround times and build resilience into the very core of the software.

The post Adventures In Rust: Bringing Exchange Support To Thunderbird appeared first on The Thunderbird Blog.

Firefox UXOn Purpose: Collectively Defining Our Team’s Mission Statement

How the Firefox User Research team crafted our mission statement


Firefox illustration by UX designer Gabrielle Lussier

Like many people who work at Mozilla, I’m inspired by the organization’s mission: to ensure the Internet is a global public resource, open and accessible to all. In thinking about the team I belong to, though, what’s our piece of this bigger puzzle?

The Firefox User Research team tackled this question early last year. We gathered in person for a week of team-focused activities; defining a team mission statement was on the agenda. As someone who enjoys workshop creation and strategic planning, I was on point to develop the workshop. The end goal? A team-backed statement that communicated our unique purpose and value.

Mission statement development was new territory for me. I read up on approaches for creating them and landed on a workshop design (adapted from MITRE’s Innovation Toolkit) that would enable the team to participate in a process of collectively reflecting on our work and defining our shared purpose.

To my delight, the workshop was fruitful and engaging. Not only did it lead us to a statement that resonates, it sparked meaningful discussion along the way.

Here, I outline the five workshop activities that guided us there.

1) Discuss the value of a good mission statement

We kicked off the workshop by discussing the value of a well-crafted statement. Why were we aiming to define one in the first place? Benefits include: fostering alignment between the team’s activities and objectives, communicating the team’s purpose, and helping the team to cohere around a shared direction. In contrast to a vision statement, which describes future conditions in aspirational language, a mission statement describes present conditions in concrete terms.

In our case, the team had recently grown in size to thirteen people. We had a fairly new leadership team, along with a few new members of the team. With a mix of longer tenure and newer members, and quantitative and mixed methods researchers (which at one point in the past had been on separate teams), we wanted to inspire team alignment around our shared goals and build bridges between team members.

2) Individually answer a set of questions about our team’s work

Large sheets of paper were set up around the room with the following questions:

A. What do we, as a user research team, do?

B. How do we do what we do?

C. What value do we bring?

D. Who benefits from our work?

E. Why does our team exist?

Markers in hand, team members dispersed around the room, spending a few minutes writing answers to each question until we had cycled through them all.


Team members during the workshop

3) Highlight keywords and work in groups to create draft statements

Small groups were formed and were tasked with highlighting keywords from the answers provided in the previous step. These keywords served as the foundation for drafting statements, with the following format provided as a helpful guide:

Our mission is to (A — what we do) by (B — how we do it).

We (C — the value we bring) so that (D — who benefits from our work) can (E — why we exist).

One group’s draft statement from Step 3

4) Review and discuss resulting statements

Draft statements emerged remarkably fluidly from the activities in Steps 2 and 3. Common elements were easy to identify (we develop insights and shape product decisions), while the differences sparked worthwhile discussions. For example: How well does the term ‘human-centered’ capture the work of our quantitative researchers? Is creating empathy for our users a core part of our purpose? How does our value extend beyond impacting product decisions?

As a group, we reviewed and discussed the statements, crossing out any jargony terms and underlining favoured actions and words. After this step, we knew we were close to a final statement. We concluded the workshop, with a plan to revisit the statements when we were back to work the following week.

5) Refine and share for feedback

The following week, we refined our work and shared the outcome with the lead of our Content Design practice for review. Her sharp feedback included encouraging us to change the phrase ‘informing strategic decisions’ to ‘influencing strategic decisions’ to articulate our role as less passive — a change we were glad to make. After another round of editing, we arrived at our final mission statement:

Our mission is to influence strategic decisions through systematic, qualitative, and quantitative research. We develop insights that uncover opportunities for Mozilla to build an open and healthy internet for all.

Closing thoughts

If you’re considering involving your team in defining a team mission statement, it makes for a rewarding workshop activity. The five steps presented in this article give team members the opportunity to reflect on important foundational questions (what value do we bring?), while deepening mutual understanding.

Crafting a team mission statement was much less of an exercise in wordsmithing than I might have assumed. Instead, it was an exercise in aligning on the bigger questions of why we exist and who benefits from our work. I walked away with a better understanding of the value our team brings to Mozilla, a clearer way to articulate how our work ladders up to the organization’s mission, and a deeper appreciation for the individual perspectives of our team members.

The Mozilla BlogUnboxing AI with the next generation

Technology and Artificial Intelligence (AI) are just about everywhere, all the time — and that’s even more the case for the younger generation. We rely on apps, algorithms and chatbots to stay informed and connected. We work, study and entertain ourselves online. We check the news, learn about elections and stay informed during crises through screens. We monitor our health using smart devices and make choices based on the results and recommendations displayed back to us. Very few aspects of our lives evade digitization, and even those that remain analog require intention and mindfulness to keep them that way.

In this context and with the rise of AI and a growing generation of teens relying on the internet for learning, entertainment and socializing, now more than ever, it’s important to find ways to involve them in conversations about how technology influences their lives, their communities and the future of the planet.

Tech isn’t all or nothing — it’s something in between

The message adults used early on when it came to discussing technology with teens was “don’t.” Don’t spend too much time on your phone. Don’t play video games. Don’t use social media. This old-school method proved ineffective and, in some cases, counterproductive.

More recently, things have changed, and in some cases, adults have welcomed technology into their homes with open arms, sometimes without critique or caution. At the same time, schools have put great emphasis on training teens in web development and robotics in order to prepare them for the ever-changing job market. But before slow-motion running on a beach into a full embrace with technology, pause for a moment to reflect on the power of what we hold in our hands, and consider the profound ways in which it has reshaped our world.

It’s as if adults jumped between two extreme poles: from utter dystopia, to bright and shiny techno-solutionism, skipping the nuances in between. So, how can adults prepare to foster discussions with teens about technological ethics and the impacts of technology? How can teens be involved and engaged in conversations about their relationship to the technologies they use? We share the future with younger generations, so we need to involve them in these conversations in meaningful ways.

Teens don’t just want to have fun, they want to talk about what matters to them

At Tactical Tech, we’re always coming up with creative, sometimes unconventional ways to talk about technology and its impacts. We make public exhibitions in unexpected places. We create toolkits and guides in surprising formats. But to make interventions specifically for teens, we knew we needed to get them involved. We asked almost 300 international teens what matters to them, what they worry about and what they expect the future to be like. The results gave us goosebumps. Here are just a few quotes:

  • “I have a sad feeling that everything in the future will be online, including school.”
  • “Everyone is inside all the time, they’re not outside enjoying the world.”
  • “Loneliness is a problem. Online relationships aren’t real.”

But not all of their responses were so despairing. They also had dreams about technology being used to help us, such as through education and medical advancements.

  • “Internet can close the distance gap that currently exists. Since society is more connected, cultures are more accepted.”
  • “Nobody dies. Artificial intelligence has trained on their personalities, and we can digitally bring them back to life thanks to this data.”

We also asked 100 international educators what they needed in order to confidently talk to teens about technology. Educators felt positive about our creative and non-judgmental approach. They encouraged us to make resources that are even more playful, to add more relatable examples and to make sure the points are as concrete and transferable as possible.

Broaching serious topics in fun and creative ways

With all that in mind, we co-created Everywhere, All The Time, a fun-yet-impactful learning experience for teens. Educators can use it to encourage young people to talk about technology, AI and how it affects them. Everywhere, All The Time can be used to create a space where they can think about their relationship with technology and become inspired to make choices about the digital world they want to live in. This self-learning intervention includes a package with captivating posters and activities about:

  • Gaming and the attention economy
  • Our relationship to technology
  • How Large Language Models (LLMs) work
  • Algorithms in everyday life
  • The materiality of the internet
  • The invisible labor behind technology

The package, created by Tactical Tech’s youth initiative, What the Future Wants, includes provocative posters, engaging activity cards and a detailed guidebook that trains educators to facilitate these nuanced and sometimes delicate conversations.

Whether you work in a school, library, community center or want to hang it up at home, you can use the Everywhere, All The Time posters, activity cards and guidebook to start conversations with teens about the topics they care about. Now is the moment to come together and involve teens in these conversations.


The post Unboxing AI with the next generation appeared first on The Mozilla Blog.

The Mozilla BlogFinn Myrstad reflects on holding tech companies accountable and ensuring that human rights are respected

At Mozilla, we know we can’t create a better future alone, that is why each year we will be highlighting the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Finn Lützow-Holm Myrstad, a true leader of development of better and more ethical digital policies and standards for the Norwegian Consumer Council. We talked with Finn about his work focusing on tech policy, the biggest concerns in 2024 and ways people can take action against those in power.

You started your journey to the work you do now when you were a little kid going to camps. What did that experience teach you that you appreciate now as an adult?

Since CISV International gathered children from around the world and emphasized team building and friendship, it gave me a more global perspective and made me realize how we need to work together to solve problems that concern us all. Now that the internet is spreading to all corners of the world and occupying a larger part of our lives, we also need to work together to solve the challenges this creates.

There’s a magnitude of subjects of concern in the tech policy space, especially considering how rapid tech is evolving in our world. Which area has you most concerned?

Our freedom to think and act freely is under pressure as increased power and information asymmetries put people at an unprecedented disadvantage, which is reinforced by deceptive design, addictive design and artificial intelligence. This makes all of us vulnerable in certain contexts. Vulnerabilities can be identified and reinforced by increased data collection, in combination with harmful design and surveillance-based advertising. Harm will probably be reinforced against groups who are already vulnerable.

What do you think are easy ways people can speak up and hold companies, politicians, etc. accountable that they might not even be thinking about?

This is not easy for the reason outlined above. Having said that, we all need to speak up. Talk to your local politicians and media about your concerns, and try to use alternative tech services when possible (for example: for messaging, browsing and web searches.) However, it is important to stress that many of the challenges need to be dealt with at the political and regulatory level.

What do you think is the biggest challenge we face in the world this year on and offline? How do we combat it?

I see the biggest challenges as interlocked with each other. For example, this year, 2024, at least 49 percent of the world’s population is set to hold national elections. The decrease in trust in democracy and public institutions is a threat to freedom and to solving existential problems such as the climate emergency. Technology can be a part of solving these problems. However, the lack of transparency and accountability, in combination with power concentrated in a few tech companies, algorithms that favor enraging content, and a large climate and resource footprint, currently makes technology part of the problem and not the solution.

We need to hold companies to account and ensure that fundamental human rights are respected. 

Finn Lützow-Holm Myrstad at Mozilla’s Rise25 award ceremony in October 2023.

What is one action that you think everyone should take to make the world and our lives online a little better?

Pause to think before you post is generally a good rule. If we all took a bit more care when posting and resharing content online, we could probably contribute to a less polarized discussion and maybe also get less addicted to our phones.

We started Rise25 to celebrate Mozilla’s 25th anniversary, what do you hope people are celebrating in the next 25 years?

That we managed to use the internet for positive change in the world, and that the open internet is still alive.

What gives you hope about the future of our world?

That there is increased focus from all generations on the need for collective action. Together, I hope we can solve big challenges like the climate crisis and securing a free and open internet.


The post Finn Myrstad reflects on holding tech companies accountable and ensuring that human rights are respected appeared first on The Mozilla Blog.

Support.Mozilla.OrgFreshening up the Knowledge Base for spring 2024

Hello, SUMO community!

This spring we’re happy to announce that we’re refreshing the Mozilla Firefox Desktop and Mobile knowledge bases. This is a project that we’ve been working on for the past several months, and now we’re ready to finally share it with you all! We’ve put together a video to walk you through what these changes mean for SUMO and how they’ll impact you.

Introduction of Article Categories

When exploring our knowledge base, we realized there are so many articles, and it’s important to set expectations for users. We’ll be introducing four article types:

  • About – Article that aims to be educational and informs the reader about a certain feature.
  • How To – Article that aims to teach a user how to interact with a feature or complete a task.
  • Troubleshooting – Article that aims to provide solutions to an issue a user might encounter.
  • FAQ – Article that focuses on answering frequently asked questions that a user might have.

We will standardize titles and how articles are formatted per category, so users know what to expect when interacting with an article.

Downsizing and concentration of articles

There are hundreds upon hundreds of articles in our knowledge base. However, many of them are repetitive and contain similar information. We want to reduce the number of articles and improve the quality of our content. We will be archiving articles and revising active articles throughout this refresh.

Style guideline update focused on reducing cognitive load

As mentioned in a previous post, we will be updating the style guidelines, aiming to reduce the cognitive load on users by introducing new guidelines like in-line images. These aren’t huge changes, but we’ll go over them in more detail when we release the updated style guidelines.

With all this coming up, we hope you join us for the community call today to learn more about the knowledge base refresh. We hope to collaborate with our community to make this update successful.

Have questions or feedback? Drop us a message in this SUMO forum thread.

Mozilla ThunderbirdApril 2024 Community Office Hours: Rust and Exchange Support

Text "COMMUNITY OFFICE HOURS APRIL 2024: RUST AND EXCHANGE" with a stylized Thunderbird bird icon in shades of blue and a custom community icon Iin the center on a lavender background with abstract circular design elements.

We admit it. Thunderbird is getting a bit Rusty, but in a good way! In our monthly Development Digests, we’ve been updating the community about enabling Rust in Thunderbird to implement native support for Exchange. Now, we’d like to invite you for a chat with Team Thunderbird and the developers making this change possible. As always, send your questions in advance to officehours@thunderbird.net! This is a great way to get answers even if you can’t join live.

Be sure to note the change in day of the week and the UTC time. (At least the time changes are done for now!) We had to shift our calendar a bit to fit everyone’s schedules and time zones!

UPDATE: Watch the entire conversation here.

April Office Hours: Rust and Exchange

This month’s topic is a new and exciting change to the core functionality: using Rust to natively support Microsoft Exchange. Join us and talk with the three key Thunderbird developers responsible for this shiny (rusty) new addition: Sean Burke, Ikey Doherty, and Brendan Abolivier! You’ll find out why we chose Rust, the challenges we encountered, and how we used Rust to interface with XPCOM and Necko to provide Exchange support. We’ll also give you a peek into some future plans around Rust.

Catch Up On Last Month’s Thunderbird Community Office Hours

While you’re thinking of questions to ask, watch last month’s office hours where we answered some of your frequently asked recent questions. You can watch clips of specific questions and answers on our TILvids channel. If you’d prefer a written summary, this blog post has you covered.

Join The Video Chat

We’ve also got a shiny new Big Blue Button room, thanks to KDE! We encourage everyone to check out their Get Involved page. We’re grateful for their support and to have an open source web conferencing solution for our community office hours.

Date and Time: Tuesday, April 23 at 16:00 UTC

Direct URL to Join: https://meet.thunderbird.net/b/hea-uex-usn-rb1

Access Code: 964573

The post April 2024 Community Office Hours: Rust and Exchange Support appeared first on The Thunderbird Blog.

Firefox Developer ExperienceFirefox WebDriver Newsletter — 125

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 125 release cycle.

Contributions

With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette.

General

New: Support for the “userAgent” capability

We added support for the User Agent capability, which is returned with all the other capabilities by the new session commands. It is listed under the userAgent key and contains the default user-agent string of the browser. For instance, when connecting to Firefox 125 (here on macOS), the capabilities will contain a userAgent property such as:

"userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:125.0) Gecko/20100101 Firefox/125.0"

WebDriver BiDi

New: Support for the “input.setFiles” command

The “input.setFiles” command is a new feature which allows clients to interact with <input> elements with type="file". As the name suggests, it can be used to set the list of files of such an input. The command expects three mandatory parameters. First, the context parameter identifies the BrowsingContext (tab or window) where we expect to find an <input type="file">. Then, element should be a sharedReference to this specific <input> element. Finally, the files parameter should be a list (potentially empty) of strings which are the paths of the files to set for the <input>. This command has a null return value.

-> {
  "method": "input.setFiles",
  "params": {
    "context": "096fca46-5860-412b-8107-dae7a80ee412",
    "element": {
      "sharedId": "520c3e2b-6210-41da-8ae3-2c499ad66049"
    },
    "files": [
      "/Users/test/somefile.txt"
    ]
  },
  "id": 7
}
<- { "type": "success", "id": 7, "result": {} }

Note that providing more than one path in the files parameter is only supported for <input> elements with the multiple attribute set. Trying to send several paths to a regular <input> element will result in an error.

It’s also worth highlighting that the command will override the files which were previously set on the input. For instance providing an empty list as the files parameter will reset the input to have no file selected.

New: Support for the “storage.deleteCookies” command

In Firefox 124, we added two methods to interact with cookies: “storage.getCookies” and “storage.setCookie”. In Firefox 125, we are adding “storage.deleteCookies” so that you can remove previously created cookies. The parameters for the “deleteCookies” command are identical to the ones for the “getCookies” command: the filter argument matches cookies based on specific criteria, and the partition argument matches cookies owned by a certain storage partition. All the cookies matching the provided parameters will be deleted. Similarly to “getCookies” and “setCookie”, “deleteCookies” will return the partitionKey which was built to retrieve the cookies.

# Assuming we have two cookies on fxdx.dev, foo=value1 and bar=value2

-> {
  "method": "storage.deleteCookies",
  "params": {
    "filter": {
      "name": "foo",
      "domain": "fxdx.dev"
    }
  },
  "id": 8
}

<- { "type": "success", "id": 8, "result": { "partitionKey": {} } }

-> {
  "method": "storage.getCookies",
  "params": {
    "filter": {
      "domain": "fxdx.dev"
    }
  },
  "id": 9
}

<- {
  "type": "success",
  "id": 9,
  "result": {
    "cookies": [
      {
        "domain": "fxdx.dev",
        "httpOnly": false,
        "name": "bar",
        "path": "/",
        "sameSite": "none",
        "secure": false,
        "size": 9,
        "value": {
          "type": "string",
          "value": "value2"
        }
      }
    ],
    "partitionKey": {}
  }
}

New: Support for the “userContext” property in the “partition” argument

All storage commands accept a partition parameter to specify which storage partition it should use, whether it is to retrieve, create or delete cookie(s). Clients can now provide a userContext property in the partition parameter to build a partition key tied to a specific user context. As a reminder, user contexts are collections of browsing contexts sharing the same storage partition, and are implemented as Containers in Firefox.

-> { "method": "browser.createUserContext", "params": {}, "id": 8 }
<- { "type": "success", "id": 8, "result": { "userContext": "6ade5b81-ef5b-4669-83d6-8119c238a3f7" } }
-> {
  "method": "storage.setCookie",
  "params": {
    "cookie": {
      "name": "test",
      "value": {
        "type": "string",
        "value": "cookie in user context partition"
      },
      "domain": "fxdx.dev"
    },
    "partition": {
      "type": "storageKey",
      "userContext": "6ade5b81-ef5b-4669-83d6-8119c238a3f7"
    }
  },
  "id": 9
}

<- { "type": "success", "id": 9, "result": { "partitionKey": { "userContext": "6ade5b81-ef5b-4669-83d6-8119c238a3f7" } } }

Bug fixes

This Week In RustThis Week in Rust 543

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Rust Nation UK
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is venndb, an append-only in-memory DB whose tables can be built via a derive macro.

Thanks to Glen De Cauwsemaecker for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

CFP - Speakers

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the submission website through a PR to TWiR.

Updates from the Rust Project

430 pull requests were merged in the last week

Rust Compiler Performance Triage

A quiet week, with slightly more improvements than regressions. There were a few noise spikes, but other than that nothing too interesting.

Triage done by @Kobzol. Revision range: 86b603cd..ccfcd950b

Summary:

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.5%    [0.3%, 1.4%]     9
Regressions ❌ (secondary)   0.4%    [0.2%, 1.1%]     20
Improvements ✅ (primary)    -0.6%   [-2.5%, -0.2%]   41
Improvements ✅ (secondary)  -0.8%   [-1.4%, -0.2%]   4
All ❌✅ (primary)            -0.4%   [-2.5%, 1.4%]    50

1 Regression, 3 Improvements, 6 Mixed; 5 of them in rollups. 62 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
New and Updated RFCs
  • No New or Updated RFCs were created this week.

Upcoming Events

Rusty Events between 2024-04-17 - 2024-05-15 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

There is absolutely no way I can imagine that Option is causing that error. That'd be like turning on the "Hide Taskbar" setting causing your GPU to catch fire.

[...]

If it's not any of those, consider an exorcist because your machine might be haunted.

Daniel Keep on rust-users

Thanks to Hayden Brown for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla ThunderbirdTeam Thunderbird Answers Your Most Frequently Asked Questions

We know the Thunderbird community has LOTS of questions! We get them on Mozilla Support, Mastodon, and X.com (formerly Twitter). They pop up everywhere, from the Thunderbird subreddit to the teeming halls of conferences like FOSDEM and SCaLE. During our March Community Office Hours, we took your most frequently asked questions to Team Thunderbird and got some answers. If you couldn’t watch the full session, or would rather have the answers in abbreviated text clips, this post is for you!

Thunderbird for Android / K-9 Mail

The upcoming release on Android is definitely on everyone’s mind! We received lots of questions about this at our conference booths, so let’s answer them!

Will there be Exchange support for Thunderbird for Android?

Yes! Implementing Exchange in Rust in the Thunderbird Desktop client will enable us to reuse those Rust crates as shared libraries with the Mobile client. Stay up to date on Exchange support progress via our monthly Developer Digests.

Will Thunderbird Add-ons be available on Android?

Right now, no, they will not be available. K-9 Mail uses a different code base than Thunderbird Desktop. Thunderbird add-ons are designed for a desktop experience, not a mobile one. We want to have add-ons in the future, but this will likely not happen within the next two years.

When Thunderbird for Android launches, will it be available on F-Droid?

It absolutely will.

When Thunderbird for Android is ready to be released, what will the upgrade path look like?

We know some in the K-9 Mail community love their adorable robot dog and don’t want to give him up yet. So we will support K-9 Mail (same code, different brand) in parallel for a year or two, until the product is more mature, and we see that more K-9 Mail users are organically switching.

Because of Android security, users will need to manually migrate from K-9 Mail to Thunderbird for Android, versus an automatic migration. We want to make that effortless and unobtrusive, and the Sync feature using Mozilla accounts will be a large part of that. We are exploring one-tap migration tools that will prompt you to switch easily and keep all your data and settings – and your peace of mind.

Will CalDAV and CardDAV be available on Thunderbird for Android?

Probably! We’re still determining this, but we know our users like having their contacts and calendars inside one app for convenience, as well as out of privacy concerns. While it would be a lot of engineering effort, we understand the reasoning behind these requests. As we consider how to go forward, we’ll release all these explorations and ideas in our monthly updates, where people can give us feedback.

Will the K-9 Mail API provide the ability to download the saved preferences that Sync stores locally to plug into automation like Ansible?

Yes! Sync is open source, so users can self-host their own instead of using Mozilla services. This question touches on the differences between data structure for desktop and mobile, and how they handle settings. So this will take a while, but once we have something stable in a beta release, we’ll have articles on how to hook up your own sync server and do your own automation.


Thunderbird for Desktop

When will we have native Exchange support for desktop Thunderbird?

We hope to land this in the next ESR (Extended Support Release), version 128, in a limited capacity. Users will still need to use the OWL Add-on for all situations where standard Exchange Web Services is not available. We don’t yet know if native calendar and address book support will be included in the ESR. We want to support every aspect of Exchange, but there is a lot of code complexity and a history of changes from Microsoft. So our primary goal is good, stable support for email by default, and calendar and address book if possible, for the next ESR.

When will conversations and a true threaded view be added to Thunderbird?

Viewing your own sent emails is an important component of a true conversation view. This is a top priority and we’re actively working towards it. Unfortunately, this requires overhauling the backend database that underlies Thunderbird, which is 20 years old. Our legacy database is not built to handle conversation views with received and sent messages listed in the same thread. Restructuring a two decades old database is not easy. Our goal is to have a new global message database in place by May 31. If nothing has exploded, it should be much easier to enable conversation view in the front end.

When will we get a full sender name column with the raw email address of the sender? This will help further avoid phishing and spam.

We plan to make this available in the next ESR — Thunderbird 128 — which is due July 2024.

Will there ever be a browser-based view of Thunderbird?

Despite our foundations in Firefox, this is a huge effort that would have to be built from scratch. This isn’t on our roadmap and not in our plans for now. If there was a high demand, we might examine how feasible this could be. Alex explains this in more detail during the short video below:

The post Team Thunderbird Answers Your Most Frequently Asked Questions appeared first on The Thunderbird Blog.

Mozilla Performance BlogPerformance Testing Newsletter, Q1 Edition

Welcome to the latest edition of the Performance Testing Newsletter! The PerfTools team empowers engineers with tools to continuously improve the performance of Mozilla products. See below for highlights from the changes made in the last quarter.

Highlights

Contributors

  • Myeongjun Go [:myeongjun]

If you have any questions, or are looking to add performance testing for your code component, you can find us in #perftest on Element, or #perf-help on Slack.

P.S. If you’re interested in including updates from your teams in a quarterly newsletter like this, and you are not currently covered by another newsletter, please reach out to me (:sparky). I’m interested in making a more general newsletter for these.

Will Kahn-GreeneObservability Team Newsletter (2024q1)

Observability Team is a team dedicated to the problem domain and discipline of Observability at Mozilla.

We own, manage, and support monitoring infrastructure and tools supporting Mozilla products and services. Currently this includes Sentry and crash ingestion related services (Crash Stats (Socorro), Mozilla Symbols Server (Tecken), and Mozilla Symbolication Service (Eliot)).

In 2024, we'll be working with SRE to take over other monitoring services they are currently supporting like New Relic, InfluxDB/Grafana, and others.

This newsletter covers an overview of 2024q1. Please forward it to interested readers.

Highlights

  • 🤹 Observability Services: Change in user support

  • 🏆 Sentry: Change in ownership

  • ‼️ Sentry: Please don't start new trials

  • ⏲️ Sentry: Cron monitoring trial ending April 30th

  • ⏱️ Sentry: Performance monitoring pilot

  • 🤖 Socorro: Improvements to Fenix support

  • 🐛 Socorro: Support guard page access information

See details below.

Blog posts

None this quarter.

Detailed project updates

Observability Services: Change in user support

We overhauled our pages in Confluence, started an #obs-help Slack channel, created a new Jira OBSHELP project, built out a support rotation, and leveled up our ability to do support for Observability-owned services.

See our User Support Confluence page for:

  • where to get user support

  • documentation for common tasks (get protected data access, create a Sentry team, etc)

  • self-serve instructions

Hop in #obs-help in Slack to ask for service support, help with monitoring problems, and advice.

Sentry: Change in ownership

The Observability team now owns Sentry service at Mozilla!

We successfully completed Phase 1 of the transition in Q1. If you're a member of the Mozilla Sentry organization, you should have received a separate email about this to the sentry-users Google group.

We've overhauled Sentry user support documentation to improve it in a few ways:

  • easier to find "how to" articles for common tasks

  • best practices to help you set up and configure Sentry for your project needs

Check out our Sentry user guide.

There's still a lot that we're figuring out, so we appreciate your patience and cooperation.

Sentry: Please don't start new trials

Sentry sends marketing and promotional emails to Sentry users which often include links to start a new trial. Please contact us before starting any new feature trials in Sentry.

Starting new trials may prevent us from trialing those features in the future when we’re in a better position to evaluate the feature. There's no way for admins to prevent users from starting a trial.

Sentry: Cron monitoring trial ending April 30th

The Cron Monitoring trial that was started a couple of months ago will end April 30th.

Based on feedback so far and other factors, we will not be enabling this feature once the trial ends.

This is a good reminder to build in redundancy in your monitoring systems. Don't rely solely on trial or pilot features for mission critical information!

Once the trial is over, we'll put together an evaluation summary.

Sentry: Performance monitoring pilot

Performance Monitoring is being piloted by a couple of teams; it is not currently available for general use.

In the meantime, if you are not one of these pilot teams, please do not use Performance Monitoring. There is a shared transaction event quota for the entire Mozilla Sentry organization. Once we hit that quota, events are dumped.

If you have questions about any of this, please reach out.

Once the trial is over, we'll put together an evaluation summary.

Socorro: Improvements to Fenix support

We worked on improvements to crash ingestion and the Crash Stats site for the Fenix project:

1812771: Fenix crash reporter's Socorro crash reports for Java exceptions have "Platform" = "Unknown" instead of "Android"

Previously, the platform would be "Unknown". Now the platform for Fenix crash reports is "Android". Further, the platform_pretty_version includes the Android ABI version.

Figure 1: Screenshot of Crash Stats Super Search results showing Android versions for crash reports.

1819628: reject crash reports for unsupported Fenix forks

Forks of Fenix outside of our control periodically send large swaths of crash reports to Socorro. When these sudden spikes happened, Mozillians would spend time looking into them only to discover they're not related to our code or our users. This is a waste of our time and resources.

We implemented support for the Android_PackageName crash annotation and added a throttle rule to the collector to drop crash reports from any non-Mozilla releases of Fenix.

From 2024-01-18 to 2024-03-31, Socorro accepted 2,072,785 Fenix crash reports for processing and rejected 37,483 unhelpful crash reports with this new rule. That's roughly 1.7%. That's not a huge amount, but because they sometimes come in bursts with the same signature, they show up in Top Crashers wasting investigation time.

1884041: fix create-a-bug links to work with java_exception

A long time ago, in an age partially forgotten, Fenix crash reports from a crash in Java code would send a crash report with a JavaStackTrace crash annotation. This crash annotation was a string representation of the Java exception. As such, it was difficult-to-impossible to parse reliably.

In 2020, Roger Yang and Will Kahn-Greene spec'd out a new JavaException crash annotation. The value is a JSON-encoded structure mirroring what Sentry uses for exception information. This structure provides more information than the JavaStackTrace crash annotation did and is much easier to work with because we don't have to parse it first.

Between 2020 and now, we have been transitioning from crash reports that only contained a JavaStackTrace to crash reports that contained both a JavaStackTrace and a JavaException. Once all Fenix crash reports from crashes in Java code contained a JavaException, we could transition Socorro code to use the JavaException value for Crash Stats views, signature generation, generate-create-bug-url, and other things.

Recently, Fenix dropped the JavaStackTrace crash annotation. However, we hadn't yet gotten to updating Socorro code to use--and prefer--the JavaException values. This broke the ability to generate a bug for a Fenix crash with the needed data added to the bug description. Work on bug 1884041 fixed that.

Comments for Fenix Java crash reports went from:

Crash report: https://crash-stats.mozilla.org/report/index/eb6f852b-4656-4cf5-8350-fd91a0240408

to:

Crash report: https://crash-stats.mozilla.org/report/index/eb6f852b-4656-4cf5-8350-fd91a0240408

Top 10 frames:

0  android.database.sqlite.SQLiteConnection  nativePrepareStatement  SQLiteConnection.java:-2
1  android.database.sqlite.SQLiteConnection  acquirePreparedStatement  SQLiteConnection.java:939
2  android.database.sqlite.SQLiteConnection  executeForString  SQLiteConnection.java:684
3  android.database.sqlite.SQLiteConnection  setJournalMode  SQLiteConnection.java:369
4  android.database.sqlite.SQLiteConnection  setWalModeFromConfiguration  SQLiteConnection.java:299
5  android.database.sqlite.SQLiteConnection  open  SQLiteConnection.java:218
6  android.database.sqlite.SQLiteConnection  open  SQLiteConnection.java:196
7  android.database.sqlite.SQLiteConnectionPool  openConnectionLocked  SQLiteConnectionPool.java:503
8  android.database.sqlite.SQLiteConnectionPool  open  SQLiteConnectionPool.java:204
9  android.database.sqlite.SQLiteConnectionPool  open  SQLiteConnectionPool.java:196

This both fixes the bug and also vastly improves the bug comments from what we were previously doing with JavaStackTrace.

Between 2024-03-31 and 2024-04-06, there were 158,729 Fenix crash reports processed. Of those, 15,556 were in the circumstances affected by this bug: they have a JavaException but not a JavaStackTrace. That's roughly 10% of incoming Fenix crash reports.

While working on this, we refactored the code that generates these crash report bugs, so it's in a separate module that's easier to copy and use in external systems in case others want to generate bug comments from processed crash data.

Further, we changed the code so that instead of dropping arguments in function signatures, it now truncates them at 80 characters.
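The idea, sketched as a hypothetical Python helper (the name and exact behavior in Socorro's code may differ):

def format_argument(arg, limit=80):
    # Keep the argument, but cap it at the limit rather than dropping it.
    return arg if len(arg) <= limit else arg[:limit] + "..."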

We're hoping to improve signature generation for Java crashes using JavaException values in 2024q2. That work is tracked in bug #1541120.

Socorro: Support guard page access information

1830954: Expose crashes which were likely accessing a guard page

We updated the stackwalker to pick up the changes for determining is_likely_guard_page. Then we exposed that in crash reports in the has_guard_page_access field. We added this field to the Details tab in crash reports and made it searchable. We also added this to the signature report.

This helps us know whether a crash may be due to a memory access bug that could be a security vulnerability vector--something we want to prioritize fixing.

Since this field is security sensitive, it requires protected data access to view and search with.
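For example, with an API token that grants protected data access, a sketch like the following should count such crashes through the Crash Stats Super Search API; treat the __true__ boolean spelling and parameter details as assumptions based on Super Search conventions.

# Sketch: count crashes flagged with guard page access via Super Search.
# Requires a token with protected data access; the parameter spelling is
# an assumption based on Super Search conventions.
import requests

resp = requests.get(
    "https://crash-stats.mozilla.org/api/SuperSearch/",
    params={"has_guard_page_access": "__true__", "_results_number": 0},
    headers={"Auth-Token": "your-api-token-here"},
)
resp.raise_for_status()
print(resp.json()["total"])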

Socorro misc

Tecken/Eliot misc

  • Maintenance and documentation improvements.

  • 5 production deploys. Created 21 issues. Resolved 28 issues.

More information

Find us:

Thank you for reading!

The Mozilla BlogOpen Source in the Age of LLMs

(To read the complete Mozilla.ai publication featuring all our OSS contributions, please visit the Mozilla.ai blog)

Like our parent company, Mozilla.ai has a founding story rooted in open-source principles and community collaboration. Since our start last year, our key focus has been exploring state-of-the-art methods for evaluating and fine-tuning large-language models (LLMs).

Throughout this process, we’ve been diving into the open-source ecosystem around LLMs.  What we’ve found is an electric environment where everyone is building. As Nathan Lambert writes in his post, “It’s 2024, and they just want to learn.”

“While everything is on track across multiple communities, that also unlocks the ability for people to tap into excitement and energy that they’ve never experienced in their career (and maybe lives).”

The energy in the space, with new model releases every day, is made even more exciting by the promise of open source where, as I’ve observed before, anyone can make a contribution and have it be meaningful regardless of credentials, and there are plenty of contributions to be made. If the fundamental question of the web is, “Why wasn’t I consulted?”, open source in machine learning today offers the answer: “You are, as long as you can productively contribute PRs. Come have a seat at the table.”

Even though some of us have been active in open-source work for some time, building and contributing to it at a team and company level is a qualitatively different and rewarding feeling. And it’s been especially fun watching upstream make its way into both the communities and our own projects. 

At a high level, here’s what we’ve learned about the process of successful open-source contributions: 

1. Start small when you’re starting with a new project. If you’re contributing to a new project for the first time, it takes time to understand the project’s norms: how fast they review, who the key people are, their preferences for communication, code review style, build systems, and more. It’s like starting a new job entirely from scratch.

Be gentle with both yourself and the reviewers and pick something like a documentation task, or a “good first issue” label just to get a feel for how things work.

2. Be easy to work with. There are specific norms around working with open source, and they closely follow this fantastic post on how to be an effective developer – “As a developer you have two jobs: to write code, and be easy to work with.”

In open source, being easy to work with means different things to different people, but I generally see it as:

a. Submitting clean PRs with working code that passes tests or gets as close as possible. No one wants to fix your build. 

b. Making small code changes by yourself, and proposing larger architecture changes to the group for approval before getting them down in code. Ask “What do you think about this?” Always try to propose a solution instead of posing more problems to maintainers: they are busy!

c. Writing unit tests if you’re adding a significant feature, where significant is anything more than a single line of code.

d. Remembering Chesterton’s fence: that code is there for a reason, study it before you suggest removing it. 

3. Assume good intent, but make intent explicit. When you’re working with people in writing, asynchronously, potentially in other countries or timezones, it’s extremely easy for context, tone, and intent to get lost in translation, and implicit knowledge abounds. Assume people are doing the best they can with what they have, and if you don’t understand something, ask about it first.

4. The AI ecosystem moves quickly. Extremely quickly. New models come out every day and are implemented in downstream modules by tomorrow. Make sure you’re ok with this speed and can match the pace. Before you start submitting PRs, follow issues on the repo, and the repo itself, to get a sense for how quickly things move and are approved. If you’re into fast-moving projects, jump in. Otherwise, pick one that moves at a slower cadence.

5. The LLM ecosystem is currently bifurcated between HuggingFace and OpenAI compatibility. An interesting pattern has developed in my open-source LLM development work: it’s become clear to me that, in this new space of developer tooling around transformer-style language models at industrial scale, you generally end up downstream of one of two interfaces:

a. Models that are trained and hosted using HuggingFace libraries, and particularly the HuggingFace Hub as infrastructure.

b. Models that are available via API endpoints, particularly as hosted by OpenAI.

If you want to be successful in this space today, you as a library or service provider have to be able to interface with both of these. 
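To make that concrete, here is a minimal sketch of sitting downstream of each interface; the model names are placeholders, and the snippets assume the transformers and openai packages respectively.

# (a) HuggingFace-style: a model pulled from the Hub and run locally.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model
print(generator("Open source in the age of LLMs", max_new_tokens=20))

# (b) OpenAI-style: a model behind an HTTP API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Open source in the age of LLMs"}],
)
print(response.choices[0].message.content)

A library or service that can speak both, local Hub models and OpenAI-compatible endpoints, covers most of the ecosystem as it stands today.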

6. Sunshine is the best disinfectant. As the recent xz issue proved, open code is better code and issues get fixed more quickly. This means, don’t be afraid to work out in the open. All code has bugs, even yours and mine, and discovering those bugs is a natural process of learning and developing better code rather than a personal failing. 

We’re looking forward to continuing our contributions, upstreaming them, and learning from them as we continue our product development work.

Read the whole publication and subscribe to future ones on the Mozilla.ai blog.


The post Open Source in the Age of LLMs appeared first on The Mozilla Blog.

Firefox NightlyExploring improvements to the Firefox sidebar

What are we working on? 

We have long been excited to improve the existing Firefox sidebar and strengthen productivity use cases in the browser. We are laying the groundwork for these improvements, and you may have seen early work-in-progress in our test builds and in Nightly behind preferences (see “Firefox Nightly with vertical tabs” and “Firefox is experimenting with a sidebar in Nightly”).

What to expect next?

In the near future, we will be landing foundational sidebar features in Nightly to ensure parity with the existing sidebar and make the new experience more useful and easy to use. Many of the ideas we are exploring are based on your suggestions in Mozilla Connect. You’ve shared how you imagine productivity, switching between contexts, and juggling multiple tasks could improve in Firefox, and we’ve listened.

We are encouraged by your positive feedback on our early concepts, and we look forward to engaging with the community and hearing more about what you think once sidebar features are ready for testing. We will announce feature readiness for feedback in the follow-up blog posts and on Connect.

In the meantime, if you have questions or general feedback, please engage with us on Mozilla Connect.

Firefox NightlyCustomizing Reader Mode – These Weeks in Firefox: Issue 158

Highlights

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Fixed a regression introduced in Firefox 116 for extensions including a sidebar command shortcut (fixed in Nightly 126 and Beta 125) – Bug 1881820
    • Thanks to Dao for investigating the regression and then following up with a fix
  • Fixed a long standing uninterruptible reflow triggered by browser-addons.js on handling “addon-install-confirmation” notifications – Bug 1360028
    • Thanks to Dao for fixing it too!

ESMification status

  • 100%
    • Thank you to everyone that has worked on and supported this effort.
    • Plan for out-of-tree changes & removing old loaders is coming soon.

Lint, Docs and Workflow

Migration Improvements

Screenshots (enabled by default in Nightly)

Search and Navigation

  • Firefox Suggest experience
    • Work continues integrating the new Rust backend and improving exposure metrics.
    • Daisuke has exposed icon mime-types along with blobs from the offline Suggest backend. Bug 1882967
    • Drew has fixed a problem with the recording of exposure metrics in experiments. Bug 1886175
  • Clipboard result
    • Karandeep has fixed a problem with empty searches returning no results in Tab, History and Bookmarks Search Mode, and a problem with the clipboard result persisting when switching through multiple empty tabs. Bug 1884094, Bug 1865336
  • SERP categorization metrics
    • Stephanie and James have fixed multiple issues in this area.
    • The categorization metric has been enabled in Nightly.
  • Search Config v2
    • Standard8 and Mandy have fixed multiple issues in this area.
    • Work continues as Config v2 has been enabled in Nightly.
  • Frecency ranking
    • Marco has changed frecency recalculation to accelerate when many changes have accumulated since the last recalculation. This should help with large imports. Bug 1873629
    • Marco has corrected a schema migration mistake that was preventing recalculation of frecency for not-recently-accessed domains. It caused domain autofill to not work as expected in the Address Bar in Firefox 125 Nightly (and the first week of Beta). Bug 1886975
  • Other fixes
    • Drew has corrected visual alignment of weather results. Bug 1886694
    • Dale has corrected visual alignment of rich search suggestions. Bug 1871022

Storybook/Reusable Components

  • Design Tokens
    • We’ve recently landed some changes to how our design tokens are handled in mozilla-central; we now have a JSON source of truth for these tokens
    • To update the tokens files (tokens-shared, tokens-platform, tokens-brand), you’ll need to modify the design-tokens.json file and then run ./mach npm run build --prefix=toolkit/themes/shared/design-system
    • Our current docs can be found on Storybook: JSON design tokens, and the more general design tokens docs
      • Porting these docs over to Firefox Source Docs will happen in the next couple of days
    • This info will also be sent out to the firefox-dev mailing list with more details and links to Firefox Source Docs

The Mozilla BlogMarek Tuszynski reflects on curating thought-provoking experiences at the intersection of technology and activism

At Mozilla, we know we can’t create a better future alone, that is why each year we will be highlighting the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Marek Tuszynski, an artist and curator that is the Executive Director and co-founder of Tactical Tech, a program dedicated to supporting initiatives focused on promoting better privacy and digital rights. We talked with Marek about his travels abroad shaping his career, what sparks his inspiration and future challenges we face online.

So the first question that I had that I wanted to ask you about this is very straightforward, but what initially inspired you to co-found Tactical Tech? What was the first thing that made you really want to start the work that you do?

Marek Tuszynski: Things were not looked at the ways that I thought it’s important to look at them, especially technical things that have been around for over 20 years now. But the truth is, it was actually more excitement. Excitement that there’s such a massive opportunity, such a massive chance for many different people — actors, places, nations — to do things different and outside of the constraints that they are in, where they are at the time, and so on. I worked internationally at the time in sub-Saharan Africa and Southeast Asia and was just thinking that a lot of tech is being damned there in these places — I’m also from Eastern Europe, so we’ve never seen the leading edge of the technology, we’ve seen the tail if we’re lucky if the censorship allowed. So I just wanted to use technology as not only an opportunity that I have been kind of given by chance — it just happened I was in the right place in the right time — but also to bring it, and to co-develop it and do things together with people. My focus early on was with open source with software, and that was the driving force behind how we think about which technology is better for society, which is a more right one, which gives you more freedoms, not restrict them, and so on, etc. The story now, as we know, it turned kind of dark, but the inspiration was fascination with the possibilities of technology in terms of the tool for how we can access knowledge and information, the tool for how we can refine the way we see the world. It’s still there, and I think was there at the very beginning, and I think that’s a major force.

You mentioned some of the traveling that you did. I’m very much of the belief that traveling is one of the best ways for us to learn. How much did the traveling and the places that you’ve been to and that you’ve seen influence or impact the work that you did and give you a bigger perspective on some of the things that you wanted to do?

That’s interesting because I come from the art background and one of my fascinations was a period in history of ours where some of the makers spent some of their time, when they were learning, on traveling. To go and visit other artists or studios, vendors and makers, etc., to see how they do things, because there was no other way for information to travel, and for me that was very important. But I think you are right. It is not cool to talk about traveling these days, because travel also means burning fossil fuels and all this kind of stuff — which is unfortunately true, but you can do other travel, I’m doing a lot of sailing these days. But the initial traveling for me, coming from an entirely restricted part of the world and where I was growing up, I actually thought, “I’m never going to travel” in my mind. This is like something I’m going to read in the books, see on films, but never experience because I can’t get a passport. I couldn’t get the basic rights to leave the border that I was constrained with. So the moment I could do that, I did it, and I practically never came back. I left the place I’m from 35 years ago.

The question that you’re asking is very important because travel teaches you a lot of things, and you become much more humble. Your eyes and brain and heart opens up much more. And you see that the world is unique, that people around you are unique — you’re not that unique yourself and there’s a lot to learn from that. Initially, when you travel, you have this arrogance — I had this arrogance at least —  that I’m going to go to places, learn them, understand them, and turn into something, etc. And then you go, and you learn, and you never stop. Learning is endless, and that’s the most fascinating part. But I think that the biggest privilege was to meet people and meet them on their own ground with their own ideas about everything. And then you confront yourself and rip the way you frame things because you just come from a place with ideas that somebody else put into your education system. So for me, yeah, it (traveling) worked in a way. And it’s not necessarily the travel in the physical space that we need. Now you can do it virtually like we have the conversation, and you get to know people and, you travel in some way. You may not see the actual architecture of the space you are in, but you are seeing “Okay, this is a person coming from somewhere with a certain set of ideas, questions, and so on, etc.” You look at them and think how we can have the conversation knowing that we come from very different places. Travel gives you that. 

When it gets to like some of the ideation process, what sparks the inspiration for the experiences you try to create? Is there any research or data that you look at? Are there just trends that you look at to get inspired by? What kind of like starts that? 

There’s a mixture. I think we’re probably the least analyzing organization. Even if it’s a trend or some kind of mainstream stuff with social media or all media, etc., it’s too late. I think for me personally, it’s always observing what’s being talked about. With AI, it’s about what aspects of AI are being totally omitted. There’s a lot of focus now, for example, on elections and the visible impact of AI: how it can amplify misinformation and confusion and basically deteriorate trust in what we see, what we hear, what we read. And it’s super important. People are going to learn that very quickly.

I think what is happening is the invisible part, where businesses that are interested in influencing political or non-political opinions around issues that are critical for people will be using AI for analyzing the data: for hyper-fast profiling of people, and for much more clever ways of addressing them with more clever advertisements, in that it won’t necessarily be paid — it doesn’t have to be. Or even how you design certain strategies, etc., that can be augmented by how you use AI. And I think this is what I will be focusing on, rather than talking about the deepfakes, which we’ve done already. But to the core of the question you asked about the topics we focus on: from the first day of Tactical Tech, we’ve always based our work on partners, collaborators and people that we work with — and often that we are invited to work with, like a group of people or institutional organizations, etc. And good listening is for me part of the critical research. So instead of coming in with a view and ideas of research questions, we listen to what questions are already there. Where the curiosity is, where the fears are, where the hopes are, and so on, and sometimes start to build backwards and think about what you can bring from the position you occupy.

Is there one collaboration or organization that you’ve worked with before that you felt was really impactful that you’re the most proud of? 

There are hundreds of those (laughs). We had this group of people from Tajikistan — many people don’t know what Tajikistan is. And they were amazing. They were basically like a sponge, just sucking everything in and trying to engage with the culture cross between us and them and everybody else and people from probably other countries and so on — it was massive. But everybody was positive and trying to figure out some kind of common language that is bizarre English being mixed with some other languages, and so on. And that first encounter turned into a friendship and collaboration that led them to be the key people that brought the whole open source tool to the school system in Tajikistan, and so on, etc. I think that was one of these examples where it was very positive, encouraging. 

We do a lot of collaboration. So we have hundreds of partners now, especially on our educational work. And I think the most successful series was with a number of groups in Brazil that we collaborated with because they really took the content and the collaboration to the next level and took ownership of that. And for me, the ideal scenario over the work we do is that we may be developing something on the stage together, but there should be a moment when we disappear from the stage and somebody else take that stage. And that’s what happened. And that’s just beautiful to see. And when somebody tells you about this product that is amazing, and they don’t even know that you were part of that, that is the best compliment you can get. 

Marek Tuszynski at Mozilla’s Rise25 award ceremony in October 2023.

What do you think is the biggest challenge we face in the world this year on and offline? How do we combat it?

I think people have different challenges in different places. The first thing that we are going to launch is a series of election influence situation rooms, which is just kind of a creative space, with a front of the house and a back of the house, that we would like to mount around some of the elections that are happening — like the U.S. or European Parliament, or many other elections. What we would like to showcase and kind of demystify is not only the way we elect people, but how the entire system works or doesn’t work.

And what we’re trying to illustrate with this project, “the situation rooms,” is to unpack it and show how important, for a lot of entities, confusion is. How important polarization is, how important the lack of trust towards specific formats of communication is, or how institutions fail, because in this environment it is much easier to put forward everything from conspiracy theories to fake information, but it also makes people frustrated. It makes people much more black and white, aligning themselves with things that they would not have aligned with earlier. I think this is the major thing that we are focusing on this year: the set of elections and how we can get people engaged without any paranoia or any kind of dystopian way of thinking about it. It helps us in a way to build an apparatus for recognizing the situation we are in. And then let’s build some methods for what we need to know, to understand what is happening and how that can be useful for democratic elections, but also how it can be extremely harmful.

What do you think is one action that everyone should take to make the world and our lives online a little bit better?

Technology is the bridge. Technologies open the channel of communications, so on, etc. Use technology for that. Don’t focus on yourself as much as you can focus on the world and other people. And if technology can help you to understand where they’re coming from, what the needs are and what kind of role you can play from whatever level of privilege you may have, use it. And the fact that you may use better technologies and some people have access to it already, that’s a certain kind of privilege that I think we should be able to share widely. 

Use technology to engage other people who don’t want to engage. Who lost trust. Don’t give up on them. Don’t give up on people on the other side, those voting for people you don’t have any respect for. They are lost. And part of the reason they are lost is the technology they’re using.

It is distorting the information they see and making them believe in things that they shouldn’t believe in, and that they probably wouldn’t if that technology didn’t exist. I think we’ve passed this kind of libertarian idea that using technology for individual good is going to turn everything into a better world. It has proven itself wrong many times by now.

We started Rise25 to celebrate Mozilla’s 25th anniversary, what do you hope people are celebrating in the next 25 years?

I think Rise 25 was fantastic because it was nice to see all the kinds of people. The 25 people who are so different in such different ways thinking about tech and ideas, and so on, etc., and so diverse in many ways that you don’t see very often. And it gave people the space to actually vocalize what they think and so on. And that was definitely unique. So I think Mozilla plays a specific role in this kind of in-between sector thing where it’s a corporation in a city in San Francisco that builds tools, but it’s also a foundation. 

Mozilla has unique access to people around the world that do a lot of creative work around technology. And I think that should be celebrated. I don’t think there’s enough of that. Usually people are celebrated for what they achieve with technology, usually making a lot of money, and so on, etc., there’s very little to talk about the technology that has positive impact on society in the world. And I think Mozilla should be touching and showcasing that, and I think there are plenty of things that can be celebrated. 

What gives you hope about the future of our world?

You know, it’s going to be okay. Don’t worry. But we are going to see people that’ll be happy and there will be people that separate. I think the more we can do now for the next generations of people that are coming up now, the better. And if you’re lucky to live long, you may feel proud for what you’ve been doing — focus on that, rather than imagining or picturing some future machine. 


The post Marek Tuszynski reflects on curating thought-provoking experiences at the intersection of technology and activism appeared first on The Mozilla Blog.

The Servo BlogServo and SpiderMonkey

As a web engine, Servo embeds another engine for its script execution capabilities, including both JavaScript and Wasm: SpiderMonkey. One of the goals of Servo is modularity, and the question of how modular it really was with regards to those capabilities came up. For example, how easy would it be for Servo to use Chrome’s V8 engine, or the next big script engine? To answer that question, we’ve written a short report analysing the relationship between Servo and SpiderMonkey.

The problem

Running a webpage happens inside the script component of Servo; the loading process starts there, and the page continues to run its HTML event loop there. By its very nature, executing a script from within a webpage requires an integration between the script engine and the web engine that surrounds it. Anything shared between the two, including the DOM itself and any other construct calling from one into the other, needs to be integrated somehow, and much but not all of that is done via WebIDL. For example, an integration area that is left for web and script engines to implement as they see fit is that with a garbage collector (see example in Rust for SpiderMonkey).

The need to integrate can result in tight coupling, but the classic ways of increasing modularity — abstractions and interfaces — can be applied here as well, and that is where we found Servo lacking in some ways, but also on the right path. Servo already comes with abstractions and interfaces for a large surface area of its integration with SpiderMonkey, providing ease of use and clarity while preserving boundaries between the two. Other parts of that integration rely on direct, and unsafe, calls into the low-level SpiderMonkey APIs.

The solution

The low-hanging fruit consists of removing these direct calls into low-level SpiderMonkey APIs, replacing them with safe and higher-level constructs. Work on this has started, through a combination of efforts from maintainers and the enthusiasm of community members: eri, tannal, and Taym Haddadi. These efforts have already resulted in the closing of several issues:

Note that the safer higher-level constructs that replace low-level SpiderMonkey API calls are still internally tightly coupled to SpiderMonkey. By centralizing these calls, and hiding them from the rest of the codebase, it becomes possible to enumerate what exactly Servo is doing with SpiderMonkey, and to start thinking about a second layer of abstraction: one that would hide the underlying script engine. An existing, and encouraging, example of such a layer comes from React Native in the form of its JavaScript Interface (JSI).

Call to action

If you are interested in contributing to these efforts, the issues below are good places to start:

For more details, read the full report on our wiki.

Don Martiplanning for SCALE 2025

I missed Southern California Linux Expo this year. Normally I can think of a talk to do, but between work and [virus redacted] I didn’t have a lot of conference abstract writing time last fall. I need some new material anyway. The talks that tend to do well for me there are kind of a mix of tips for doing weird stuff.

I didn’t really have anything good to submit last fall, but this year I am building up a bunch of miscellaneous Linux stuff similar to what has worked for me at SCALE before. Because of the big Fediverse trend, the search quality crisis, the ends of third-party cookies and Twitter, and enshittification in general, it seems like there’s a lot more interest in redoing your blog. I know I have been doing it, so that’s the subject I’m going to try to come up with something on for next SCALE. But I’m not going to use a blog software package. I’m more comfortable with a mix of different stuff. This blog is now mainly done in Pandoc, auto-rebuilt by Make, and has a bunch of scripts in various languages, including shell, Perl, Python, and even a little bit of Lua now.

protip: use cowsay(1) to alert the user to errors in Makefile before restarting

I don’t really expect anybody to copy this blog so much as outdo it, by getting the audience to realize how much you can now do with the available tools. I’m not going to win any design prizes, but with modern CSS I can make a reasonable responsive layout and dark/light modes. And yes, you can make a valid RSS feed in GNU Make.

The feature I just did today is the similar posts in the left column. Remember that paper about how you can measure the similarity between two pieces of text by seeing how well they compress together? (“Low-Resource” Text Classification: A Parameter-Free Classification Method with Compressors - ACL Anthology.) This is Python code for rating the similarity of chunks of text. Check it out in the left column: you can now follow the links to similar blog posts.

import gzip

def z(s):
    # Length of the gzip-compressed text, a proxy for information content.
    return len(gzip.compress(bytes(s, 'utf-8')))

def simscore(t1, t2):
    "lower is better"
    if len(t1) == 0 or len(t2) == 0:
        return 1
    # Texts that share a lot of content compress much better together
    # than they do separately.
    base = z(t1) + z(t2)
    minsize = min(z(' '.join([t1, t2])), z(' '.join([t2, t1])), base)
    return int(10000 * minsize / base)
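For instance, a hypothetical way to rank related posts with it (lower score means more similar):

# Hypothetical usage: rank other posts by similarity to the current one.
posts = {
    "scale-2025": "full text of one post...",
    "block-usa": "full text of another post...",
}
current = "full text of the post being rendered..."
for slug, text in sorted(posts.items(), key=lambda kv: simscore(current, kv[1])):
    print(slug, simscore(current, text))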

Next I will probably try stuff like Fediverse-powered comments, some kind of search feature, LLM training set poisoning, some privacy and p2p features, and maybe something else. A lot of what I’m doing here will be possible to translate into other environments, and should be portable to people’s favorite blog software.

Related

Am I metal yet? The old blog software

Automatically run make when a file changes

Hey kids, favicon!

Bonus links

Notes on git’s error messages

Verified curl

Ideas for my dream blogging CMS

German state moving 30,000 PCs to LibreOffice

xz, Tidelift, and paying the maintainers

[$] Radicle: peer-to-peer collaboration with Git

An HTML Switch Control

A (tiny, incomplete, single user, write-only) ActivityPub server in PHP

Popular git config options

Don MartiB L O C K in the U S A

According to a survey done by Censuswide for Ghostery, a majority of Americans now use ad blockers. Yes, it looks like a well-designed survey of 2,000 people. But it’s hard to go from what people say they’re using to figuring out how much protection they really have.

  • Are they answering accurately? People might be under- or over-reporting their use of ad blockers. Under-reporting because they don’t want to admit to free-riding on ad-supported sites, or over-reporting because installing an ad blocker is now one of the typical Internet tips you’re supposed to follow, like not re-using passwords and installing software updates when they come out. People might be trying to look more responsible. When the FBI says you should be running an ad blocker to deal with fake search ads, that puts a certain amount of pressure on people. (icymi: Ad Blockers and the Four Currencies by Lars Doucet.)

  • Are they using an honest blocker with real protection? The ad blocking category has a lot of scams, including adware and paid allow-listing, so most of the people saying yes are not getting the blocking they think they are. (The company that owns the number one ad blocker makes a business out of selling exceptions from blocking. Senator Ron Wyden wrote a letter to the FTC asking them to investigate the ad-blocking industry back in 2020, but no action as far as I know. In the meantime you can check your ad blocker using a tool from the EFF.)

  • How much of their browsing is on a protected browser or device? It’s a lot easier to install an ad blocker on desktop than on mobile, and people have different habits.

  • Is protection being circumvented by server-to-server tracking? Ad blocking has been a thing for a long time, so the surveillance industry has gotten pretty good at working around it. Facebook has responded to Apple ATT and to blockage of their tracking pixels by rolling out server-to-server tracking, which avoids any protection on the client. Google and other companies also have server-to-server tracking.

The second most newsworthy part of the new Censuswide survey is why people say they’re using an ad blocker. “Protect online privacy” is now the number one reason, with “block ads” and “speed up page loads” coming in after that. I’ll leave the most newsworthy part to the end. I know, I know, the surveillance advertising people are going to reply with something like, yeah, right, these ad blocker users are just rationalizing free-riding on ad-supported sites, like Napster users making bogus fair use arguments instead of paying for CDs back when that was a thing. In order to understand this survey we have to put it in context with other research. Compare to Turow et al. on attitudes to cross-context tracking, and to an IAB Europe study that found only 20% of users would be happy for their data to be shared with third parties for advertising purposes.

It looks like the privacy concerns are real for a significant subset of people, and part of the same trend as popular US State Privacy Legislation. Different people have different norms around ad personalization, and if people can’t get companies to comply with those norms they will get the government to do something about it. For companies, adjusting to privacy norms doesn’t just mean throwing privacy-enhancing technologies (PETs) at the problem. Jerath et al. found similar levels of perceived privacy violations for on-device ad personalization as for old-fashioned cookie-based tracking. PETs have different mathematical properties from cookies, but either don’t address other problems or make them worse.

Companies deploying PETs are asking users to trust that they will do complicated math honestly—but they’re not starting from a position of trust. When users have the opportunity to evaluate the companies’ honesty in a way they do understand, the companies don’t measure up. Most people can look at an online map of their neighborhood and spot places where a locksmith isn’t. And it’s easy to look up a person on a social site and see where there are enough profiles that not all of them can be real.

screenshot of several fake Facebook profiles, all using the same two photos of retired US Army General Mark Hertling

The biggest problem with PETs will be that the Big Tech companies do both easy-to-understand activities (like scams, fake profiles, and union busting) and hard-to-understand activities, like PET math. “I see you served me scam ads and a map with fake companies in my neighborhood, but I totally trust your math to protect my privacy,” said no one ever. If you don’t know whether the PET math is honest, but you can see the same company acting dishonestly in other ways, then it’s hard to trust the PET math. (Personally I think the actual PETs are probably legit, but they’re being rolled out as part of a larger program to squeeze out legit publishers and other smaller companies.)

In AIC polls, confidence in Amazon, Meta, and Google has fallen since 2018.

Maybe trust issues are behind Censuswide’s most newsworthy data point: experienced advertisers (with 5 or more years of experience in advertising) are more likely to run an ad blocker than average (66% > 52%). Reminds me of how experienced email users were early adopters of spam filters: the more you know, the more you block. Between sketchy placements, bogus reports, and a “your call is important to us” approach to advertiser support, the advertisers are having a much worse surveillance advertising experience than the rest of us. The Censuswide survey (full report PDF) also shows that more experienced advertisers than ordinary users believe that the Big Tech companies are likely to abuse data (but realistically, who knows if they are or not?).

The Tragedy of the Commons is bogus when it comes to actual traditional practices for managing common resources, but it is a thing within large companies. Individual product managers are incentivized to achieve short-term goals either at the expense of other product managers, by dishonest practices that spend down the (common across the whole company) reputation level, or both. For example, within the same large company one business unit can achieve its goals by licensing e-books, while another business unit can achieve its goals by running ads on infringing copies of the same titles. Big Tech fans often ask, if these companies are so distrusted, why do people keep using their products? But another question is, if these companies are so trusted, why do voters keep asking the government to take over managing their products? Privacy settings are hard for users to figure out and easy for companies to override, but a vote for privacy is easier and sticks better. (and possibly the one thing that a bitterly divided nation can agree on)

Doc Searls called ad blocking the biggest boycott in world history back in 2015. Ad blocking looks like a response to creepy practices (or perceived privacy violations if that works better for you) and those practices are part of a more general scam culture crisis. Tressie McMillan Cottom writes (read the whole thing),

Scams weaken our trust in social institutions, but their going mainstream—divorced from empathy for the victims or stigma for the perpetrators—means that we have accepted scams as institutions themselves.

I can’t see any one big policy solution for surveillance advertising, tech oligopolies, or the broader scam culture problem. All of that stuff would have to change in order to move the ad blocking numbers. It’s going to take a variety of approaches, maybe including a surveillance advertising ban, maybe a Pigovian tax on databases containing PII, maybe breaking up Big Tech firms. So far the most promising approach seems to be state laws with private right of action, which is one of the reasons I’m so optimistic about Washington State’s My Health My Data Act. My experience on a jury (not an ad-related case) was the most productive meeting I have been in since I came to California. If surveillance advertising issues can grind their way through a few jury trials, where lawyers have an incentive to explain what’s going on in an accurate, comprehensible way, then both surveillance marketers and privacy nerds will be able to reset how we approach this stuff based on more common sense.

Related

Reputation, signaling, and targeted ads

banning surveillance advertising

improving web advertising

Bonus links

Why Does Ad Tech Still Fail To Spot – And Stop – MFA-Fueled Schemes?

Flying Under The Radar Is Not A Realistic Compliance Strategy

7 Things You Should Know About California’s Privacy Watchdog

Class-Action Lawsuit against Google’s Incognito Mode

CONFIRMED: Elon Musk’s X lost a HUGE brand safety certification after our complaint

Nubai Ventures Sues Outbrain, Claiming Its Traffic Is Riddled With Bots

Critics of the TikTok Bill Are Missing the Point

EU signals doubts over legality of Meta’s privacy fee

They Praised AI at SXSW—and the Audience Started Booing

AI Is Threatening My Tech and Lifestyle Content Mill

There is no EU cookie banner law

A Close Up Look at the Consumer Data Broker Radaris

The FTC’s PrivacyCon Was Chock-Full Of Warning Signs For Online Advertising

How Google Blew Up Its Open Culture and Compromised Its Product

The Tech Industry Doesn’t Understand Consent

Tracking ads industry faces another body blow in the EU

Privacy Sandbox’s Latency Issues Will Cost Publishers

Reminder – Google is enforcing stricter rules for consumer finance ad targeting

The Mozilla BlogRachel Hislop reflects on working for Beyoncé, creating community for Black women and the power of storytelling

At Mozilla, we know we can’t create a better future alone, that is why each year we will be highlighting the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with creator Rachel Hislop, a true storyteller at heart that is currently the VP of content and editor-in-chief at OkayAfrica. We talked with Rachel about the ways the internet allows us to tell our own stories, working for Beyoncé and what’s to come in the next chapter of her career.

So the first question I have is what was your favorite Beyoncé project to work on? I want to know. Also, what’s your favorite Beyoncé album?  

I’ll say that my favorite project to work on was definitely Lemonade. It was just so different from anything that had ever existed. It was so culturally relevant. It was well-timed. It was honest and just really beautiful. You can sometimes be jaded by the work when you’re too close to it, and I think that’s across the board in any industry, but there was never a moment of working on that project that I didn’t understand and appreciate just how beautiful everything was. It was really just like, all of you people that I see at work every day, this came out of your minds? And then, it was just a lot of love, and it was a really important time in culture, I think. And I really enjoyed being part of that.

My favorite Beyoncé album … I don’t know that I can answer that because every one has had a very important impact on my life. I had The Writing’s on the Wall on a cassette tape that I used to play in my little boombox. And Dangerously in Love, I was in high school and experiencing little crushes for the first time. The albums grew with me. I met Sasha Fierce when I was in college, learning the dualities of self away from home, and so on and so forth. So, it’s hard to pick a favorite when each of those moments was so tied to different portions of my life.

I’m curious to know what types of stories pull you in and influence the work that you do as a writer?

It’s living, honestly. I’m a really curious person, and I think all the best writers are. It’s even weird calling myself a writer sometimes because so much of what I write right now is just for me, and projects that I really believe in. It’s really just the curiosity. I want to know how everything went. How did you get here? What inspired you? I’m good for going out alone and sitting next to a stranger and then learning their whole life story because I am just truly interested in people and there is no parallel in lived life. I also love reading fiction. I am not the girl that’s like, “yeah, self-help books and self-improvement” and things like that. I want to fully escape into a story. I want to be able to turn my brain on and imagine things and fully detach and escape from this world. And then I love reading old magazine articles from when people were allowed to have long, luxurious deadlines and follow subjects for a really long time. I remember this article that they would make us read in journalism school, which was “Frank Sinatra Has a Cold,” and I love that voyeurism journalism where if you can’t get to the subject, you’re talking about the things around them. Everything is so interesting to me. I also love TikTok. I learn so much. I think that it’s a really valuable form of storytelling. In short-form, I think it’s really hard — it’s harder than people give a lot of these creators credit for. My grand hope is that those small insights through those short videos are piquing curiosity and sending people on rabbit holes to go discover and read and just be deeper into the internet. I remember back in college there was this internet plugin called StumbleUpon, and it would roulette the internet and land on these random pages where you’d learn things. That is how my brain is always processing. I’ll see something that’ll interest me and then be like, “I want to know more about that.”

Rachel Hislop at Mozilla’s Rise25 award ceremony in October 2023.

You’ve been in the content media space for a while. What do you think is the biggest misconception people have about working in this space?

You know that saying that if you do what you love every day, you’ll never work a day in your life? I think it’s that. There is a pressure that you feel when you are following your actual passion. This is not just my job, this is what I like to do in my free time, this is what I think is important. This is what I feel like I was born to do, right? To storytell. And every day, it feels like work. And it feels even harder than work because it feels like a calling. From the outside, I’m sure people are like, “Oh, you get to do this cool stuff. You get to talk to all these interesting people. You get to talk about the things that you care about.” But there is truly a pressure to document this stuff in a way that pays homage to everyone and everything that came before it, that solidifies its place in history to come, and to handle things with delicacy and care and importance. There are just so many layers to it. So, it’s never just about the one thing that you love. It’s about the responsibility that you now have to this thing because you love it. And when you’re working in culture spaces specifically, there is always someone that you have to give their flowers to for you to be able to do the work that you’re doing, but you also have to be forward-looking. You have to look forward and innovate, while you are also honoring what has come before. And there can be a misconception about it being easy. Or “I can do that” — we see a lot of that with interviewers I love, where people say, “Shannon Sharpe didn’t ask the right questions,” or “[insert podcaster here who got a really great hit] didn’t ask the right questions.” We’ve lost the art: because people see the end product, they believe that it is easy. And they forget that everything that journalists are doing is in service to the audience and not in service to themselves. And we’re seeing this really weird landscape now where everyone is in service to themselves and to their own popularity and to growing their own audiences, but then they don’t serve those audiences with ethics. That for me is the misconception — that it’s just so easy, anyone could do it. Now everyone is doing it, and they’re wondering where the value and the premium stories are and all of these things like that.

Who are the Black women you’re inspired by and the people you go to when you’re faced with so many of the challenges Black people have in the content/media industry?

I am very intentional about friendships, and I treat my friends like extensions of myself. They’re the board of directors for my life, and these are often friends from college. I think people really discount the friends that you make in your first big girl job and how you’re learning everything together. Those have become the people that I call on throughout my career, who I call on to collaborate with projects, who I call on when things are falling apart. I’ve been just so blessed to have those people in my life as my board of directors for all things. And I serve as the same for them. I am going to tell you the truth: I don’t know that I’ve met a Black woman that I’m not inspired by. I truly am just so in awe of so many women. 

I made a practice after the pandemic of being like, “I don’t want these people who I like online to just be my online friends.” I wanted to meet these people in real life. I started just DM’ing people and being like, “Hey, we’ve been on here a while. Can we grab lunch?” And then just continuing that connection in person has been so, so fantastic.

I do also want to shout out Poynter Institute. I did a training with them in 2019 right before the pandemic for women in leadership positions in newsrooms. When I tell you it was a week-long, intensive, seven days we were in a classroom … it was like therapy for work in a way that I didn’t know I needed, and I didn’t know was available to any of us. We built such strong bonds, even though some people worked at competing publications. But when we put all of that aside, it was just women of all ages and all backgrounds who were working in an industry and really, really cared about the work they were doing and wanted to be their best. And through that group of people, I have just made true, lifelong friends. When things were falling apart in 2020, I was calling on my cohort members and building deep, deep connections from there.

What do you think is the biggest challenge we face in the world this year on and offline? How do we combat it?

There’s so much happening in the world, but the one thing that I can say continues to be dangerous is misinformation online. I’m going to speak specifically about Palestine. We see the power of storytelling from the front lines in a way that we have never witnessed before with any conflict, right? And we’ve seen that unfold into just horrors that we would never know were happening if we did not see it coming from the front lines. I just give kudos to the journalists that are on the ground and the people who have become journalists by force, who have documented for us situations that we couldn’t even fathom. Even if we tried, we couldn’t fathom what it’s like to work through that. And while that is really helpful in illuminating the evils of the world, the other side of it is that there’s so much misinformation, because everyone is trying to be fast to discredit what we’re seeing with our own eyes, and the framing that used to be able to take place is not available anymore. And I’m not going to speak to that being a political tool or otherwise, but the framing is not as readily available. I think we saw this with our election in America. Specifically, we’re in an election year — we saw this with our elections, four, eight years ago — and now as technology starts to grow and change faster than we are learning how to master it, I really do believe that misinformation is going to be one of the hardest things to combat.

I really don’t know how to combat it. Things are just changing so quickly and there’s just so much access to so much. I think the answer as it is with most things is community and people coming together to dream together. There’s not going to be a single person that solves for all of this. It’s going to take collective efforts to help make sure that we’re doing our best.

What is one action that you think everyone should take to make the world and our lives online a little better?

I think everyone can be a little bit more curious. We don’t need to trust things at face value the first time. We need to be more curious about the sources of the information that we are taking in. And it doesn’t mean that we have to be constantly engaging in combat with it. You can take things at face value and then do more research to inform yourself about all sides of a story, and I think that that’s one action that we can take in our day-to-day lives to just be better.  

We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope people are celebrating in the next 25 years?

I hope we get those flying cars that we were promised (laughs). But truly, I hope that in 25 years, we are celebrating the earth more. I really do hope that we’re celebrating the earth still housing us and that we’re all just being a little kinder and more thoughtful in the ways that we’re engaging with nature and ourselves, and that people are spending a little bit more time reconnecting with who they are offline.

What gives you hope about the future of our world?

I mentor with the Lower East Side Girls Club, which is a nonprofit here in New York, and we do a mentee outing once a month. The girls are aged middle school through high school, and they are so bright and so well-rounded and smart, but they’re also funny, and they have great social cues and they think so deeply about the world and they’re really compassionate. The things that used to trip me and my peers up as middle schoolers don’t trip them up; they’re so evolved past that. Every time that I think that mentoring means me teaching, I realize that it really means me learning, and when I leave those girls, I’m like, “alright, we’re going to be okay.” They give me some hope.


The post Rachel Hislop reflects on working for Beyoncé, creating community for Black women and the power of storytelling appeared first on The Mozilla Blog.

Hacks.Mozilla.OrgPrototype even faster with the Gradio UI for Figma component library

Generative AI is moving quickly as an industry, and so teams exploring new ideas and technologies need to move quickly as well. To do so, we have been using Gradio, a low-code prototyping toolkit from Hugging Face, to spin up experiments and experiences. Gradio has allowed us to validate concepts through prototyping without large investments of time, effort, or infrastructure.
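For a sense of how little code that takes, here is a minimal sketch of a Gradio prototype, where the respond function is a stand-in for a real model call:

# A minimal Gradio prototype; respond() is a stand-in for a model call.
import gradio as gr

def respond(prompt):
    return f"Echo: {prompt}"

demo = gr.Interface(fn=respond, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()  # serves a local web UI for the prototype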

Although Gradio has made the development phase of prototyping easier, the design phase has been largely the same. Even with Gradio, designers have had to create components in Figma, outline expected user flows and behaviors, and hand off designs to developers in the same way they always have. While working on a recent exploration, we realized something was needed: a set of Figma components based on Gradio that enabled designers to create wireframes quickly.

Today, we are releasing our library of design components for Gradio for others to use. The components are based on version 4.23.0 of Gradio and will be available through our Figma profile: Mozilla Innovation Projects, https://www.figma.com/@futureatmozilla. We hope these components help teams accelerate their discovery and experimentation with ML and generative AI.

You can find out more about Gradio at https://www.gradio.app/ and more about innovation at Mozilla at https://future.mozilla.org

Thanks to Amy Chiu and Anais Ron who created the components and to the Gradio team for their work. Happy designing!

What’s Inside Gradio UI for Figma?

Because Gradio is an ever-changing prototyping kit, current components are based on version 4.23.0 of Gradio. We selected components based on their wide array of potential uses. Here is a list of the components inside the kit:

  • Typography (e.g. headers, body fonts)
  • Iconography (e.g. chevrons, arrows, corner expanders) 

Small Components:

  • Buttons
  • Checkbox
  • Radio
  • Sliders
  • Tabs
  • Accordion
  • Delete Button
  • Error Message
  • Media Type Labels
  • Media Player Controller

Big Components:

  • Label + Textbox
  • Accordion with Label + Input
  • Video Player
  • Label + Counter
  • Label + Slider
  • Accordion + Label
  • Checkbox with Label
  • Radio with Label
  • Accordion with Content
  • Top navigation

How to Access and Use Gradio UI for Figma

To start using the library, follow these simple steps:

  1. Access the Library: Access the component library directly by visiting our public Figma profile (https://www.figma.com/@futureatmozilla) or by searching for “Gradio UI for Figma” within the Figma Community section of your web or desktop Figma application.
  2. Explore the Documentation: Familiarize yourself with the components and guidelines to make the most out of your design process.
  3. Connect with Us: Connect with us by following our Figma profile or emailing us at innovations@mozilla.com

The post Prototype even faster with the Gradio UI for Figma component library appeared first on Mozilla Hacks - the Web developer blog.

Mozilla ThunderbirdThunderbird for Android / K-9 Mail: March 2024 Progress Report

Featured graphic for "Thunderbird for Android March 2024 Progress Report" with stylized Thunderbird logo and K-9 Mail Android icon, resembling an envelope with dog ears.

If you’ve been wondering how the work to turn K-9 Mail into Thunderbird for Android is coming along, you’ve found the right place. This blog post contains a report of our development activities in March 2024. 

We’ve published monthly progress reports for a while now. If you’re interested in what happened previously, check out February’s progress report. The report for the preceding month is usually linked in the first section of a post. But you can also browse the Android section of our blog to find progress reports and release announcements.

Fixing bugs

For K-9 Mail, new stable releases typically include a lot of changes. K-9 Mail 6.800 was no exception. That means a lot of opportunities to accidentally introduce new bugs. And while we test the app in several ways – manual tests, automated tests, and via beta releases – there are always some bugs that aren’t caught and make it into a stable version. So we typically spend a couple of weeks after a new major release fixing the bugs reported by our users.

K-9 Mail 6.801

Stop capitalizing email addresses

One of the known bugs was that some software keyboards automatically capitalized words when entering the email address in the first account setup screen. A user opened a bug and provided enough information (❤) for us to reproduce the issue and come up with a fix.

Line breaks in single line text inputs

At the end of the beta phase a user noticed that K-9 Mail wasn’t able to connect to their email account even though they copy-pasted the correct password to the app. It turned out that the text in the clipboard ended with a line break. The single line text input we use for the password field didn’t automatically strip that line break and didn’t give any visual indication that there was one.

While we knew about this issue, we decided it wasn’t important enough to delay the release of K-9 Mail 6.800. After the release we took some time to fix the problem.

DNSSEC? Is anyone using that?

When setting up an account, the app attempts to automatically find the server settings for the given email address. One part of this mechanism is looking up the email domain’s MX record. We intended for this lookup to support DNSSEC and specifically looked for a library supporting this.

Thanks to a beta tester we learned that DNSSEC signatures were never checked. The solution turned out to be embarrassingly simple: use the library in a way that it actually validates signatures.

Strange error message on OAuth 2.0 failure

A user in our support forum reported a strange error message (“Cannot serialize abstract class com.fsck.k9.mail.oauth.XOAuth2Response”) when using OAuth 2.0 while adding their email account. Our intention was to display the error message returned by the OAuth server. Instead an internal error occurred. 

We tracked this down to the tool optimizing the app by stripping unused code and resources when building the final APK. The optimizer was removing a bit too much. But once the issue was identified, the fix was simple enough.

Crash when downloading an attachment

Shortly after K-9 Mail 6.800 was made available on Google Play, I checked the list of reported app crashes in the developer console. Not a lot of users had gotten the update yet. So there were only very few reports. One was about a crash that occurred when the progress dialog was displayed while downloading an attachment. 

The crash had been reported before. But the number of crashes never crossed the threshold where we consider a crash important enough to actually look at. 

It turned out that the code contained the bug since it was first added in 2017. It was a race condition that was very timing sensitive. And so it worked fine much more often than it did not. 

The fix was simple enough. So now this bug is history.

Don’t write novels in the subject line

The app was crashing when trying to send a message with a very long subject line (around 1000 characters). This, too, wasn’t a new bug. But the crash occurred rarely enough that we didn’t notice it before.

The bug is fixed now. But it’s still best practice to keep the subject short!

Work on K-9 Mail 6.802

Even though we fixed quite a few bugs in K-9 Mail 6.801, there’s still more work to do. Besides fixing a couple of minor issues, K-9 Mail 6.802 will include the following changes.

F-Droid metadata

In preparation for building two apps (Thunderbird for Android and K-9 Mail), we moved the app description and screenshots that are used for F-Droid’s app listing to a new location inside our source code repository. We later found out that this new location is not supported by F-Droid, leading to an empty app description on the F-Droid website and inside their app.

We switched to a different approach and hope this will fix the app description once K-9 Mail 6.802 is released.

Push not working due to missing permission

Fresh installs of the app on Android 14 no longer automatically get the permission to schedule exact alarms. But this permission is necessary for Push to work. This was a known issue. But since it only affects new installs and users can manually grant this permission via Android settings, we decided not to delay the stable release until we added a user interface to guide the user through the permission flow.

K-9 Mail 6.802 will include a first step to improve the user experience. If Push is enabled but the permission to schedule exact alarms hasn’t been granted, the app will change the ongoing Push notification to ask the user to grant this permission.

In a future update we’ll expand on that and ask the user to grant the permission before allowing them to enable Push.

What about new features?

Of course we haven’t forgotten about our roadmap. As mentioned in February’s progress report we’ve started work on switching the user interface to use Material 3 and adding/improving Android 14 compatibility.

There’s not much to show yet. Some Material 3 changes have been merged already. But the user interface in our development version is currently very much in a transitional phase.

The Android 14 compatibility changes will be tested in beta versions first, and then back-ported to K-9 Mail 6.8xx.

Releases

In March 2024 we published the following stable release:

  • K-9 Mail 6.801

There was no new beta release in March.

The post Thunderbird for Android / K-9 Mail: March 2024 Progress Report appeared first on The Thunderbird Blog.

Chris H-CHow to go from “Looks like something changed in a Firefox Desktop version” to “Here is a list of potential culprit bugs”

This will mostly be helpful to Firefox Desktop folks, so if you’re not one of those, please instead enjoy a different blogpost. I recommend this one about the three roles of data engagements.

So you’ve found yourself a plot that looks like this:

A timeseries bar plot that begins as an uptake curve then has a sudden drop around February 22. There is no legend and no y-axis as they are unimportant, and sometimes I like to be cagey about absolute figures.

You suspect this has something to do with a code change because, wouldn’t you know it, the sharp decline starts around Feb 22 and we released Firefox 123 on Feb 20. But where do you go from here? Here’s a step-by-step of how I went from this plot arriving in Slack#data-help to finding the bugfix that most likely caused the change:

1. Ensure this is actually a version-specific change

It’s interesting that the cliff in the plot happened near a release day, and it’s an excellent intuition to consider code releases for these sorts of sea-changes in data volume or character. But we should verify that this is the case by grouping by mozfun.norm.truncate_version(app_version, 'major') AS major_version which in our case gives us:

The same timeseries bar plot as before, but coloured to show groups by major Firefox Desktop version. The cliff happens solely in the colours for Firefox 123 and above.

Sure enough, in this case the volume cliff happens entirely within the Firefox 123+ colours. If this isn’t what you get, then it’s somewhat less likely that this is caused by a client code change and this guide might not help you. But for us this is near-certain confirmation that the change in the data is caused by a code change that landed in Firefox 123… but which one?

This is where I spent a little time checking some frequent “gotcha” changes that could’ve happened:

  • Was it because data went from all-channel to pre-release-only? No: the probe definitions didn’t change, and the fall isn’t severe enough for that (it would look more like an order of magnitude).
  • Was it because specific instrumentation within the group happened to expire in Fx123? No: the first plot is grouped by specific probe, and all of the groups shared the same shape as their sum.
  • Was it an incredibly-successful engagement-boosting experiment that ended? No: there haven’t been any relevant experiments since last July.

2. Figure out which Nightly builds are affected

Firefox Desktop releases new software versions twice a day on the Nightly channel. We can look at the numbers reported by these builds to narrow down what specific 12h period the code landed that caused this drastic shift. Or, well, you’d think we could, but when you group by build_id you get:

Another bar plot, but instead of the x-axis being time it is now "build id" which is a timestamp of a sort. The data is all over the place and patchy with no or little clear pattern.

Because our Nightly population isn’t randomly distributed across timezones, there are usage patterns that affect the population who use which build on which day. And sometimes there are “respins” where specific days will have more than 2 nightlies. And since our Nightly population is so small (You Can Help! Download Nightly Today!), and this data is a little sparse to begin with, little changes have big effects.

No, far more commonly the correct thing to do is to look at what I call a “build day”. This is how GLAM makes things useful, and this is how I make patterns visible. So group by SUBSTR(build_id, 1, 8) AS build_day, and you get:

It looks like a timeseries bar plot, but the x-axis is "build day" so it isn't quite. Notably, there's a sudden cliff starting with the nightlies for January 18.

Much better. We can see that the change likely landed in Jan 18’s nightlies. That Jan 18-20 are all of a level suggests to me that it probably ended up in all of Jan 18’s nightly builds (if it had only landed in one of the (normally) two nightly builds, we’d expect to see a short fall-off where Jan 18 would be more like an average of Jan 17 and 19).

Regardless of when during the day, we’re pretty sure we have this nailed down to only one day’s worth of patches! That’s good… but it could be better.
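
For reference, the query behind a plot like this looks roughly as follows (a sketch only: the table and filter names are illustrative stand-ins, and the step-1 query is the same shape with mozfun.norm.truncate_version(app_version, 'major') in place of the build-day truncation):

    -- Illustrative names; adapt to your dataset.
    SELECT
      SUBSTR(build_id, 1, 8) AS build_day,
      COUNT(*) AS row_count
    FROM your_telemetry_table
    WHERE submission_date >= '2024-01-01'
    GROUP BY build_day
    ORDER BY build_day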

3. Going from build days to pushlog

Ever since I was the human glue keeping the (now-decommissioned) automated regression detection system “alerts.tmo” working, I’ve had a document on my disk reminding me how to transform build days or build_ids into a “pushlog” of changes that landed in the suspect builds. This is how it works:

  1. Get the hg revisions of the suspect builds by looking through this list of all firefox releases for the suspect builds’ ids. You want the final build of the day before the first suspect build day and the final build of the final suspect build day, which in this case are Jan 17 and Jan 18, so we get f593f07c9772 and 9c0c2aaab123:
A visual excerpt of the firefox releases list https://hg.mozilla.org/mozilla-central/firefoxreleases. For illustration only.
  2. Put them into this template: https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange={}&tochange={} — which gives us https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=f593f07c9772&tochange=9c0c2aaab123

This gives you a list of all changes that are in the suspect builds, plus links to the specific code changes and the relevant bugs, with the topic sentence from each commit right there for you. Handy!

4. Going from a pushlog to a culprit

This is where human pattern matching, domain expertise, organizational memory, culture and practices, and institutional conventions all combine… or, to put it another way, I don’t know how to help you get from the list of all code that could have caused your data change to the one (or more) likely suspects. My brain has handily built me a heuristic and not handed me the source code, alas. But I’ve noticed some patterns:

  • Any change that is backed out can be disregarded. Often for reasons of test failures changes will be backed out and relanded later. Sometimes that’s later the same day. Sometimes that’s outside our pushlog. Skip any changes that have been backed out by disregarding any commits from a bug that is mentioned before a commit that says “Backed out N changesets (bug ###)…”.
  • You can often luck out by just text searching for keywords. It is custom at Mozilla to try to be descriptive about the “what” of a change in the commit’s topic, so you could try looking for “telemetry” or “ping” or “glean” to see if there’s anything from the data collection system itself in there. Or, since this particular example had to do with Firefox Relay’s integration with Firefox Desktop, I looked for “relay” (no hits) and then “form” (which hit a few times, like on the word “information”, … but also on the culprit which was in the form detector code.)
  • This is a web view on the source code, so you’re not limited to what it gives you. If you have a mozilla-central checkout yourself, you can pull up the commits (if you’re using git-cinnabar you can use its hg2git functionality to change the revs from hg to git) and dump their sum-total changes to a viewer, or pipe it through grep, or turn it into a spreadsheet you can go through row-by-row, or anything you want. I’m lazy so I always try keywording on the pushlog first, but these are always there for when I strike out.

5. Getting it wrong

Just because you found the one and only commit that landed in a suspect build that is at all related, even if that commit’s bug specifically mentions that it fixed a double-counting issue, even if there’s commentary in the code review that explains that they expect to see this exact change you just saw… you might be wrong.

Do not be brusque in your reporting. Do not cast blame. And for goodness’ sake be kind. Even if you are correct, being the person who caused a change that resulted in this investigation can be a not-fun experience. Ask Me How I Know.

Firefox Desktop is a complex system, and complex systems fail. It’s in their nature.


And that’s it! If you have any comments, questions, or (better yet) improvements, please find me on the #glean:mozilla.org channel on Matrix and I’d love to chat.

:chutten

Mozilla ThunderbirdAutomated Testing: How We Catch Thunderbird Bugs Before You Do

Since the release of Thunderbird 115, a big focus has been on improving the state of our automated testing. Automated testing increases software quality by minimizing the number of bugs accidentally introduced by changes to the code. For each change made to Thunderbird, our testing machines run a set of tests across Windows, macOS, and Linux to detect mistakes and unintended consequences. For a single change (or a group of changes that land at the same time), 60 to 80 hours of machine time is used running tests.

Our code is going to be under more pressure than ever before – with a bigger team making more changes, and monthly releases reducing the time code spends on testing channels before being released.

We want to find the bugs before our users do.

Why We’re Testing

We’re not writing tests merely to make ourselves feel better. Tests improve Thunderbird by:

  • Preventing mistakes
    If we test that some code behaves in an expected way, we’ll find out immediately if it no longer behaves that way. This means a shorter feedback loop, and we can fix the problem before it annoys the users.
  • Finding out when somebody upstream breaks us
    Thunderbird is built from the Firefox code. The Firefox code, which we are not responsible for, is 30 to 40 times the size of the code we are responsible for. When something inevitably changes in Firefox that affects us, we want to know about it immediately so that we can respond.
  • Freeing up human testers
    If we use computers to prove that the program does what it’s supposed to do, particularly if we avoid tedious repetition and difficult-to-set-up tasks, then the limited human resources we have can do more things that humans are better at.
    For example, I’ve recently added tests that check 22 ways to trigger fetching mail, and 10 circumstances fetching mail might not work. There’s no way our human testers (great though they are) are testing all of them, but our automated tests can and do, several times a day.
  • Thinking through what the code should be doing
    Testing forces an engineer to look at the code from a different point-of-view, and this is helpful to think about what the code is supposed to do in more circumstances. It also makes it easier to prove that the code does work in obscure circumstances.
  • Finding existing bugs
    In software terms we’re working with some very old code, and much of it is untested. Testing it puts a fresh set of eyes on the code and reveals some of the mistakes of the past, and where the ravages of time have broken things. It also helps the person writing the tests to understand what the code does, a lot better than just reading the code does.

We’re not trying to completely cover a feature or every edge case in tests. We are trying to create a testing framework around the feature so that when we find a bug, as well as fixing it, we can easily write a test preventing the bug from happening again without being noticed. For too much of the code, this has been impossible without a weeks-long detour into tests.

Breaking New Ground

In the past few months we’ve figured out how to make automated tests for things that were previously impossible:

  • Communication with mail servers using encrypted channels.
  • OAuth2 authentication with mail servers.
  • Communication with web servers where a specific address must be used and an unencrypted channel must not be used.
  • Servers at any given host name or port. Previously, if we wanted to start a server for automated testing, it had to be on the local machine at a non-standard location. Now we can pretend that the server is anywhere, and using standard ports, which is needed for proper testing of account configuration features. (Actually, this was possible before, but now it’s much easier.)

These new abilities are being used to wrap better testing around account set-up features, ahead of the new Account Hub development, so that we can be sure nothing breaks without being noticed. They’re also helping test that collecting mail works when it should, or gives the error prompts we expect when it doesn’t.

Code coverage

We record every line of code that runs during our tests. Collecting all that data tells us what code doesn’t run during our tests. If a block of code doesn’t run during any of our tests, nothing will tell us when it breaks until somebody uses the code and complains.

Our code coverage data can be viewed at coverage.thunderbird.net. You can also look at Firefox’s data at coverage.moz.tools.

Looking at the data, you might notice that our overall number is now lower than it was when we started measuring. This doesn’t mean that our testing got worse; it actually shows where we added a lot of code (that isn’t maintained by us) in the third_party directory. For a better reflection of the progress we’ve made, check out the individual directories, especially mail/base which contains the most important user interface code.

  • Just setting up the code coverage tools and looking at the results uncovered several memory leaks. (A memory leak is where memory is allocated for a task and not released when it is no longer needed.) We fixed these leaks and some more that existed in our test code. We now have very low levels of memory leaking in our test runs, so if we make a mistake it is easy to spot.
  • Code coverage data can also point to code that is no longer used. We’ve removed some big chunks of this dead code, which means we’re not wasting time maintaining it.

Mozmill no more

Towards the end of last year we finally retired an old test suite known as Mozmill. Those tests were partially migrated to a different test suite (Mochitest) about four years ago, and things were mostly working fine so it wasn’t a priority to finish. These tests now do things in a more conventional way instead of relying on a bunch of clever but weird tricks.

How much of the code is test code?

About 27%. This is a very rough estimate based on the files in our code repository (minus some third-party directories) and whether they are inside a directory with “test” in the name or not. That’s risen from about 19% over the last five years.

There is no particular goal in mind, but I can imagine a future where there is as much test code as non-test code. If we achieve that, Thunderbird will be in a very healthy place.

A stacked area chart showing the estimated lines of test code (in red) and non-test code (in blue) over time, from January 2019 to January 2024. The chart indicates both types of code increase over this period.

Looking ahead, we’ll be asking contributors to add tests to their patches more often. This obviously depends on the circumstance. But if you’re adding or fixing something, that is the best time to ensure it continues to work in the future. As always, feel free to reach out if you need help writing or running tests, either via Matrix or the Topicbox mailing lists.

Geoff Lankow, Staff Engineer

The post Automated Testing: How We Catch Thunderbird Bugs Before You Do appeared first on The Thunderbird Blog.

Support.Mozilla.OrgKeeping you in the loop: What’s new in our Knowledge Base?

Hello, SUMO community!

We’re setting the stage for something big: a revamp of our style guide designed to make our support content not just user-friendly, but user-delightful. To get a clearer picture of the SUMO user experience, we enlisted the help of an external agency, embarking on a research project designed to peel back the layers of how users interact with our platform. The results were quite revealing. Many users, it turns out, find themselves overwhelmed by the vast amount of information available, often feeling confused and struggling to pinpoint the exact answers they’re searching for. To address this, we’re rolling out targeted improvements: focused enhancements to our style guides and contributor resources, aiming to refine how we organize, categorize, and present our support content in SUMO for a smoother, more intuitive user journey.

Have questions or feedback? Drop us a message in this SUMO forum thread.

Refreshing the content taxonomy

A key takeaway from the research was the users’ difficulty in navigating our content categories. This prompted us to rethink our approach to organizing support content, aiming for a better alignment with user needs and industry best practices. This project is in full swing, and we’ll be ready to share more details with you shortly.

Auditing the Firefox content

In our effort to align our content with user needs, we’ve initiated a comprehensive audit of all Firefox support articles. This exhaustive review aims to identify areas where we can reclassify content, eliminate outdated information, and consolidate similar topics. Our goal is to ensure that every piece of information in the KB is relevant, easy to understand, and directly beneficial to our users.

We’re gearing up to share how you can contribute to this exciting initiative. Mark your calendars for the SUMO Community Meeting on Wednesday, April 10, 2024, where we’ll unveil more about this project.

Updating the article types

Using consistent content types for our knowledge base articles has many benefits including ease of navigation and improved clarity and organization, in addition to helping us create content more effectively. We are transitioning to categorizing external knowledge base articles into four types, each serving a specific purpose:

  • About: These articles address “What is…” questions, providing essential information to help readers understand a topic.
  • How-to: These articles focus on answering “How to…?” questions, guiding readers through the steps required to achieve a specific goal or procedure.
  • Troubleshooting: These articles assist users in identifying, diagnosing, and resolving common issues they may encounter with a product, service, or feature by addressing “How to…?” questions related to problem-solving.
  • FAQ: These articles contain concise answers to frequently asked questions on a single topic, which may not fit within other individual KB articles, providing a quick reference for common inquiries.

Stay tuned for additional training and documentation on these article types!

Reducing cognitive load

We believe finding information should not be akin to a mental obstacle course. Focused on minimizing cognitive load, we’ve outlined a series of strategies aimed at guiding users directly to the information they need, no fuss involved. Below are the key strategies we’re implementing:

  • Straight to the point with inline images and icons: We’re transitioning from textual guidance to visual demonstrations. By embedding inline targeted UI captures and icons directly into the article flow, we aim to provide a more visual path for users, minimizing the need for mental translation of text into actions. But, hang on – we haven’t forgotten about making these changes work for everyone. For those using screen readers, we’re counting on you to help us ensure every image and icon comes with comprehensive alt text, making every visual accessible through sound. And on the localization front, your skills are more important than ever. We’re calling on you to assist in capturing and adding alt text to localized images, ensuring it’s accessible and resonant for every member of our global community. For details see Effective use of inline images.
  • Cleaner, more focused images with SUI (simplified user interfaces): To make things even clearer, we’re simplifying our product’s UI in screenshots to just the essentials. This not only makes the images easier to follow but also means they’ll stay accurate longer, even if small UI changes happen. For more info, see Simplifications.
  • Streamlined steps with annotated screenshots: For tasks that necessitate two or more clicks or actions on a single screen, we’re shifting to a more intuitive approach: using screenshots marked with numbered annotations. This strategy will clear away the need for multiple, similar screenshots, making instructions easier to follow while minimizing scrolling.

Keep an eye out for the updated style guides – they’re coming soon!

What this means for you

Our updates will be rolling out from Q2 to Q4 2024, and we’re thrilled to have you on board as we bring these changes to life. The kickoff is just around the corner, so stay tuned for updates! Have thoughts to share or looking to contribute? We’re all ears. Engage with us directly on this SUMO forum thread. Your feedback and involvement are crucial as we progress together.

Thank you for making a difference!

The Rust Programming Language BlogAnnouncing Rust 1.77.2

The Rust team has published a new point release of Rust, 1.77.2. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.77.2 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.77.2

This release includes a fix for CVE-2024-24576.

Before this release, the Rust standard library did not properly escape arguments when invoking batch files (with the bat and cmd extensions) on Windows using the Command API. An attacker able to control the arguments passed to the spawned process could execute arbitrary shell commands by bypassing the escaping.
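
To illustrate the affected pattern, here is a hedged sketch (not code from the standard library or the reported exploit; cleanup.bat and user_input are hypothetical):

    use std::process::Command;

    fn run_cleanup(user_input: &str) -> std::io::Result<()> {
        // On Windows, spawning a .bat file routes through cmd.exe, which has
        // its own argument-splitting rules. Before 1.77.2, a crafted
        // `user_input` could smuggle extra shell commands past the escaping.
        Command::new("cleanup.bat").arg(user_input).status()?;
        Ok(())
    }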

This vulnerability is CRITICAL if you are invoking batch files on Windows with untrusted arguments. No other platform or use is affected.

You can learn more about the vulnerability in the dedicated advisory.

Contributors to 1.77.2

Many people came together to create Rust 1.77.2. We couldn't have done it without all of you. Thanks!

The Rust Programming Language BlogChanges to Rust's WASI targets

WASI 0.2 was recently stabilized, and Rust has begun implementing first-class support for it in the form of a dedicated new target. Rust 1.78 will introduce new wasm32-wasip1 (tier 2) and wasm32-wasip2 (tier 3) targets. wasm32-wasip1 is an effective rename of the existing wasm32-wasi target, freeing the target name up for an eventual WASI 1.0 release. Starting with Rust 1.78 (May 2nd, 2024), users of WASI 0.1 are encouraged to begin migrating to the new wasm32-wasip1 target before the existing wasm32-wasi target is removed in Rust 1.84 (January 9th, 2025).

In this post we'll discuss the introduction of the new targets, the motivation behind it, what that means for the existing WASI targets, and a detailed schedule for these changes. This post is about the WASI targets only; the existing wasm32-unknown-unknown and wasm32-unknown-emscripten targets are unaffected by any changes in this post.

Introducing wasm32-wasip2

After nearly five years of work the WASI 0.2 specification was recently stabilized. This work builds on WebAssembly Components (think: strongly-typed ABI for Wasm), providing standard interfaces for things like asynchronous IO, networking, and HTTP. This will finally make it possible to write asynchronous networked services on top of WASI, something which wasn't possible using WASI 0.1.

People interested in compiling Rust code to WASI 0.2 today are able to do so using the cargo-component tool. This tool is able to take WASI 0.1 binaries, and transform them to WASI 0.2 Components using a shim. It also provides native support for common cargo commands such as cargo build, cargo test, and cargo run. While it introduces some inefficiencies because of the additional translation layer, in practice this already works really well and people should be able to get started with WASI 0.2 development.

We're however keen to begin making that translation layer obsolete. And for that reason we're happy to share that Rust has made its first steps towards that with the introduction of the tier 3 wasm32-wasip2 target landing in Rust 1.78. This will initially miss a lot of expected features such as stdlib support, and we don't recommend people use this target quite yet. But as we fill in those missing features over the coming months, we aim to eventually meet the criteria to become a tier 2 target, at which point the wasm32-wasip2 target would be considered ready for general use. This work will happen through 2024, and we expect for this to land before the end of the calendar year.

Renaming wasm32-wasi to wasm32-wasip1

The original name for what we now call WASI 0.1 was "WebAssembly System Interface, snapshot 1". Rust shipped support for this in 2019, and we did so knowing the target would likely undergo significant changes in the future. With the knowledge we have today though, we would not have chosen to introduce the "WASI, snapshot 1" target as wasm32-wasi. We should have instead chosen to add some suffix to the initial target triple so that the eventual stable WASI 1.0 target can just be called wasm32-wasi.

In anticipation of an eventual WASI 1.0 target, and to preserve consistency between target names, we’ll begin rolling out a name change to the existing WASI 0.1 target. Starting in Rust 1.78 (May 2nd, 2024) a new wasm32-wasip1 target will become available. Starting with Rust 1.81 (September 5th, 2024) we will begin warning existing users of wasm32-wasi to migrate to wasm32-wasip1. And finally in Rust 1.84 (January 9th, 2025) the wasm32-wasi target will no longer be shipped on the stable release channel. This will provide an 8 month transition period for projects to switch to the new target name when they update their Rust toolchains.
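
For most projects, migrating should be a matter of swapping the target name wherever it appears, along these lines (standard rustup and cargo invocations, assuming a Rust 1.78 or later toolchain):

    rustup target add wasm32-wasip1
    cargo build --target wasm32-wasip1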

The name wasip1 can be read as either "WASI (zero) point one" or "WASI preview one". The official specification uses the "preview" moniker, however in most communication the form "WASI 0.1" is now preferred. This target triple was chosen because it not only maps to both terms, but also more closely resembles the target terminology used in other programming languages. This is something the WASI Preview 2 specification also makes note of.

Timeline

This table provides the dates and cut-offs for the target rename from wasm32-wasi to wasm32-wasip1. The dates in this table do not apply to the newly-introduced wasm32-wasi-preview1-threads target; this will be renamed to wasm32-wasip1-threads in Rust 1.78 without going through a transition period. The tier 3 wasm32-wasip2 target will also be made available in Rust 1.78.

date        Rust Stable  Rust Beta  Rust Nightly  Notes
2024-02-08  1.76         1.77       1.78          wasm32-wasip1 available on nightly
2024-03-21  1.77         1.78       1.79          wasm32-wasip1 available on beta
2024-05-02  1.78         1.79       1.80          wasm32-wasip1 available on stable
2024-06-13  1.79         1.80       1.81          warn if wasm32-wasi is used on nightly
2024-07-25  1.80         1.81       1.82          warn if wasm32-wasi is used on beta
2024-09-05  1.81         1.82       1.83          warn if wasm32-wasi is used on stable
2024-10-17  1.82         1.83       1.84          wasm32-wasi unavailable on nightly
2024-11-28  1.83         1.84       1.85          wasm32-wasi unavailable on beta
2025-01-09  1.84         1.85       1.86          wasm32-wasi unavailable on stable

Conclusion

In this post we've discussed the upcoming updates to Rust's WASI targets. Come Rust 1.78 the wasm32-wasip1 (tier 2) and wasm32-wasip2 (tier 3) targets will be added. In Rust 1.81 we will begin warning if wasm32-wasi is being used. And in Rust 1.84, the existing wasm32-wasi target will be removed. This will free up wasm32-wasi to eventually be used for a WASI 1.0 target. Users will have 8 months to switch to the new target name when they update their Rust toolchains.

The wasm32-wasip2 target marks the start of native support for WASI 0.2. In order to target it today from Rust, people are encouraged to use the cargo-component tool instead. The plan is to eventually graduate wasm32-wasip2 to a tier 2 target, at which point cargo-component will be upgraded to support it natively instead.

With WASI 0.2 finally stable, it's an exciting time for WebAssembly development. We're happy for Rust to begin implementing native support for WASI 0.2, and we're excited about what this will enable people to build.

Tiger OakesResizeObserver is a safe place to read scrollWidth/clientWidth

Avoid forced synchronous layout and safely read an element's size.

The Rust Programming Language BlogSecurity advisory for the standard library (CVE-2024-24576)

The Rust Security Response WG was notified that the Rust standard library did not properly escape arguments when invoking batch files (with the bat and cmd extensions) on Windows using the Command API. An attacker able to control the arguments passed to the spawned process could execute arbitrary shell commands by bypassing the escaping.

The severity of this vulnerability is critical if you are invoking batch files on Windows with untrusted arguments. No other platform or use is affected.

This vulnerability is identified by CVE-2024-24576.

Overview

The Command::arg and Command::args APIs state in their documentation that the arguments will be passed to the spawned process as-is, regardless of the content of the arguments, and will not be evaluated by a shell. This means it should be safe to pass untrusted input as an argument.

On Windows, the implementation of this is more complex than other platforms, because the Windows API only provides a single string containing all the arguments to the spawned process, and it’s up to the spawned process to split them. Most programs use the standard C run-time argv, which in practice results in arguments being split in a mostly consistent way.

One exception though is cmd.exe (used among other things to execute batch files), which has its own argument splitting logic. That forces the standard library to implement custom escaping for arguments passed to batch files. Unfortunately it was reported that our escaping logic was not thorough enough, and it was possible to pass malicious arguments that would result in arbitrary shell execution.

Mitigations

Due to the complexity of cmd.exe, we didn't identify a solution that would correctly escape arguments in all cases. To maintain our API guarantees, we improved the robustness of the escaping code, and changed the Command API to return an InvalidInput error when it cannot safely escape an argument. This error will be emitted when spawning the process.
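
In practice, the rejection surfaces as an ordinary I/O error at spawn time. A minimal sketch of handling it (script.bat and the argument are hypothetical; whether any particular string is rejected depends on the escaping logic):

    use std::io::ErrorKind;
    use std::process::Command;

    fn main() {
        match Command::new("script.bat").arg("untrusted^&input").spawn() {
            Ok(_) => println!("argument was escaped safely; process spawned"),
            // Arguments that cannot be safely escaped for cmd.exe now fail
            // here instead of reaching the shell.
            Err(e) if e.kind() == ErrorKind::InvalidInput => {
                eprintln!("refusing to spawn: {e}");
            }
            Err(e) => eprintln!("spawn failed: {e}"),
        }
    }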

The fix will be included in Rust 1.77.2, to be released later today.

If you implement the escaping yourself or only handle trusted inputs, on Windows you can also use the CommandExt::raw_arg method to bypass the standard library's escaping logic.
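
For completeness, a sketch of that opt-out (Windows-only; assumes trusted_args is already escaped or fully trusted, and script.bat is hypothetical):

    use std::os::windows::process::CommandExt;
    use std::process::Command;

    fn run_trusted(trusted_args: &str) -> std::io::Result<()> {
        // raw_arg() appends the given text to the command line verbatim,
        // bypassing the standard library's escaping entirely.
        Command::new("cmd.exe")
            .arg("/c")
            .raw_arg(format!("script.bat {trusted_args}"))
            .status()?;
        Ok(())
    }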

Affected Versions

All Rust versions before 1.77.2 on Windows are affected, if your code or one of your dependencies executes batch files with untrusted arguments. Other platforms or other uses on Windows are not affected.

Acknowledgments

We want to thank RyotaK for responsibly disclosing this to us according to the Rust security policy, and Simon Sawicki (Grub4K) for identifying some of the escaping rules we adopted in our fix.

We also want to thank the members of the Rust project who helped us disclose the vulnerability: Chris Denton for developing the fix; Mara Bos for reviewing the fix; Pietro Albini for writing this advisory; Pietro Albini, Manish Goregaokar and Josh Stone for coordinating this disclosure; Amanieu d'Antras for advising during the disclosure.

Mozilla Privacy BlogMozilla provides feedback to ACM’s DSA Guidelines

The EU’s Digital Services Act (DSA) has taken effect, ushering in a new era of accountability, transparency, and responsibility for digital platforms. Mozilla has actively supported the DSA –  and its aim to build a safer digital ecosystem – since the legislation was first proposed, and continues to contribute to conversations about how to implement it effectively.

Technology companies that offer services in the EU must “designate a sufficiently mandated legal representative in the Union and provide information relating to their legal representatives to the relevant authorities,” and each EU country must appoint a Digital Services Coordinator to interpret and enforce the DSA.

In January of this year the Authority for Consumers and Markets in the Netherlands (ACM) published draft guidelines for its interpretation and enforcement of the DSA. Mozilla recently provided feedback, focused largely on areas where further detail or clarification would be helpful, as well as discussing challenges small and mid-sized platforms may face during implementation.

Specifically, Mozilla recommended the following:

  • Clarification of “ancillary services.”  

The ACM’s draft guidelines note that Recital 13 of the DSA exempts “ancillary services” where, as with the comment section of a newspaper’s website, “the possibility of posting comments… is only an incidental characteristic of the main service.” Mozilla recommends that this “ancillary services” exception also expressly include services for tech support and product feedback, and similar platforms that exist only to support a primary product that is not subject to the DSA. Such forums are clearly ancillary to the main products, as their purpose is to help address bugs and other product-specific issues within those products.

  • Refining the definition of “traders.” 

The DSA imposes additional requirements on platforms that host B2C online marketplaces, by requiring that the platforms track and store data about “traders” that operate on their platform.  DSA Recital 23, which presumes that traders in an online marketplace are offering goods or services for a price, highlights that this provision is intended to cover those platforms that facilitate online commerce. Mozilla recommends that the guidelines make this intent clear, by expressly stating that: (i) “traders” do not include those providing free online services, and (ii) platforms which do not incur profits or facilitate the exchange of money are not B2C online marketplaces.

  • Allowing platforms the flexibility to address spam.

The DSA’s obligations do not apply when platforms act to address “deceptive, high-volume commercial content.” For effective implementation of the guidelines, we believe there needs to be more clarification of how such content is defined. The ACM guidance indicates that the exception applies where someone intentionally manipulates a service through the use of bots, fake accounts, or deceptive practices. Mozilla recommends that the guidance be supplemented to ensure that platforms have the ability to address evolving threats: including clarifying that the references to bots and fake accounts are non-exhaustive examples and not intended to further constrain the spam exception, and establishing a plan to periodically update the guidance to address changing circumstances and developing technologies.

  • Clarifying the Statement of Reasons requirement.

Both the DSA itself and the ACM guidance require platforms to provide a statement of reasons whenever they moderate content or restrict a user account, explaining the legal or contractual provision on which their action was based. Mozilla asked that ACM provide additional details on what such statements should contain; this would provide greater clarity and standardization for platforms and ensure that moderation (particularly of illegal content) remains workable at scale.

  • Allowing platforms flexibility on suspensions.

The ACM guidance allows a platform to permanently suspend users for “manifestly illegal content related to serious crimes.” However, it requires that a platform always issue a warning before suspending a user. Mozilla recommends that the ACM expressly confirm platforms have the right to suspend users for violating their Terms of Service, even if their activity is not illegal. Mozilla also recommends that the warning requirement be clarified, and reduced in cases where having to warn a user might prevent platforms from responding to serious offenses in a timely manner.

As a longtime advocate for the DSA and for platform accountability, Mozilla is enthusiastic about the legislation’s potential to create a safer Internet ecosystem for all. Our comments to ACM, and our ongoing work on this subject, aim to further that goal, without overly burdening small and mid-size platforms. We look forward to working with the ACM and other European regulators in the coming months, as this legislation continues to take shape.

The post Mozilla provides feedback to ACM’s DSA Guidelines appeared first on Open Policy & Advocacy.

Mozilla ThunderbirdThunderbird Time Machine: Was Thunderbird 3.0 Worth The Wait?

Let’s step back into the Thunderbird Time Machine and teleport ourselves back to December 2009. If you were on the bleeding edge, maybe you were upgrading your computer to the newly released Windows 7 (or checking out Ubuntu 9.10 “Karmic Koala”.) Perhaps you were pouring all your free time into Valve’s ridiculously fun team-based survival shooter Left 4 Dead 2. And maybe, just maybe, you were eagerly anticipating installing Thunderbird 3.0 — especially since it had been a lengthy two years since Thunderbird 2.0 had launched.

What happened during those two years? The Thunderbird developer community — and Mozilla Messaging — clearly stayed busy and productive. Thunderbird 3.0 introduced several new feature milestones!

1) The Email Account Wizard

We take it for granted now, but in the 2000s, adding an account to an email client wasn’t remotely simple. Traditionally you needed to know your IMAP/POP3 and SMTP server URLs, port numbers, and authentication settings. When Thunderbird 3.0 launched, all that was required was your username and password for most mainstream email service providers like Yahoo, Hotmail, or Gmail. Thunderbird went out and detected the rest of the settings for you. Neat!

2) A New Tabbed Interface

With Firefox at its core, Thunderbird followed in the footsteps of most web browsers by offering a tabbed interface. Imagine! Being able to quickly tab between various searches and emails without navigating a chaotic mess of separate windows!

3) A New Add-on Manager

Screenshot from HowToGeek’s Thunderbird 3.0 review.

Speaking of Firefox, Thunderbird quickly adopted the same kind of Add-on Manager that Firefox had recently integrated. No need to fire up a browser to search for useful extensions to Thunderbird — now you could search and install new functionality from right inside Thunderbird itself.

4) Advanced Search Options

Searching your emails got a massive boost in Thunderbird 3.0. Advanced filtering tools meant you could filter your results by sender, attachments, people, folders, and more. A shiny new timeline view was also introduced, letting you jump directly to a certain date’s results.

5) The Migration Assistant

Tying this all together was a simple but wonderful migration assistant. It served as a way to introduce users to certain new features (like per-account IMAP synchronization), and visually toggle them on or off (useful for displaying the revised Message Toolbar and giving users a choice of where to enjoy it). To me, this particular addition felt ahead of its time. We’ve been discussing the idea of re-introducing it in a future Thunderbird release, but one of the steep hurdles to doing so now is localization. If it’s something you’d like to see, let us know in the comments.

Try It Out For Yourself

If you want to personally step into the Thunderbird Time Machine, every version ever released for Windows, Linux, and macOS is available in this archive. I ran mine inside of a Windows 7 virtual machine, since my native Linux install complained about missing libraries when trying to get Thunderbird 3.0 running.

Regardless if you’re a new Thunderbird user or a veteran who’s been with us since 2003, thanks for being on the journey with us!

Previous Time Machine Destinations:

The post Thunderbird Time Machine: Was Thunderbird 3.0 Worth The Wait? appeared first on The Thunderbird Blog.

William DurandIntroducing xpidump

I wrote xpidump to give a human-readable summary of some information about a Firefox add-on. It is designed to answer these two questions: is the add-on likely1 signed? And if so, how?

This tool takes an XPI file as input. XPI files are Firefox add-ons packaged as ZIP archives with the .xpi file extension. xpidump currently extracts information from up to 4 files in an XPI (depending on what is available in the archive):

  • manifest.json: this JSON file defines the add-on. It is required in any add-on (packaged or not, signed or not).
  • META-INF/mozilla.rsa: this binary file is a PKCS#7 signature. Any signed add-on should have this file (in addition to META-INF/mozilla.sf and META-INF/manifest.mf but xpidump doesn’t read them).
  • META-INF/cose.sig: this binary file is a COSE signature2. It might not be present when the add-on isn’t signed or relatively old3. There should also be a META-INF/cose.manifest file when this file exists.
  • mozilla-recommendation.json: this JSON file is generated by Mozilla’s signing service Autograph when an add-on is signed with recommendation states. This is how Firefox knows that an add-on is recommended for instance. It might or might not be present.

xpidump is both a command-line tool and a web app since the latter is usually more convenient (no need to install anything). It’s written in Rust, and compiled to WebAssembly for the web app. You can try it at: williamdurand.fr/xpidump/.
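
The tool’s actual implementation lives in the repository linked below, but as a rough illustration of the general approach, here is a minimal Rust sketch (using the zip crate; the file name and output format are illustrative):

    use std::fs::File;
    use zip::ZipArchive;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // An XPI is just a ZIP archive with a .xpi extension.
        let file = File::open("addon.xpi")?;
        let mut archive = ZipArchive::new(file)?;

        // The presence (or absence) of these files hints at whether and how
        // the add-on was signed.
        for name in [
            "META-INF/mozilla.rsa",
            "META-INF/cose.sig",
            "mozilla-recommendation.json",
        ] {
            match archive.by_name(name) {
                Ok(_) => println!("{name}: present"),
                Err(_) => println!("{name}: absent"),
            }
        }
        Ok(())
    }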

xpidump is available in the browser thanks to WebAssembly!

The code is published on GitHub under the MIT license, see: willdurand/xpidump. I don’t have any more plans for this weekend-ish project, it’s doing what I wanted it to be doing… Let me know if you have ideas, though.

One more thing while we’re here… If you want a tool to read the entire content of any XPI file and get tons of information, you want to use CRX Viewer created by my brilliant colleague Rob!

  1. This tool doesn’t verify the signature, so it cannot guarantee that the add-on is correctly signed. 

  2. Well, a COSE sign message with a signature and serialized with CBOR, though we should probably refer to the protocol as COSEish because of this issue

  3. Mozilla started to dual-sign add-ons in 2019. 

Niko MatsakisOwnership in Rust

Ownership is an important concept in Rust — but I’m not talking about the type system. I’m talking about ownership in our open source project. One of the big failure modes I’ve seen in the Rust community, especially lately, is the feeling that it’s unclear who is entitled to make decisions. Over the last six months or so, I’ve been developing a project goals proposal, which is an attempt to reinvigorate Rust’s roadmap process — and a key part of this is the idea of giving each goal an owner. I wanted to write a post just exploring this idea of being an owner: what it means and what it doesn’t.

Every goal needs an owner

Under my proposal, the project will identify its top priority goals, and every goal will have a designated owner. This is ideally a single, concrete person, though it can be a small group. Owners are the ones who, well, own the design being proposed. Just like in Rust, when they own something, they have the power to change it.1

Just because owners own the design does not mean they work alone. Like any good Rustacean, they should treasure dissent, making sure that when a concern is raised, the owner fully understands it and does what they can to mitigate or address it. But there always comes a point where the tradeoffs have been laid on the table, the space has been mapped, and somebody just has to make a call about what to do. This is where the owner comes in. Under project goals, the owner is the one we’ve chosen to do that job, and they should feel free to make decisions in order to keep things moving.

Teams make the final decision

Owners own the proposal, but they don’t decide whether the proposal gets accepted. That is the job of the team. So, if e.g. the goal in question requires making a change to the language, the language design team is the one that ultimately decides whether to accept the proposal.

Teams can ultimately overrule an owner: they can ask the owner to come back with a modified proposal that weighs the tradeoffs differently. This is right and appropriate, because teams are the ones we recognize as having the best broad understanding of the domain they maintain.2 But teams should use their power judiciously, because the owner is typically the one who understands the tradeoffs for this particular goal most deeply.

Ownership is empowerment

Rust’s primary goal is empowerment — and that is as true for the open-source org as it is for the language itself. Our goal should be to empower people to improve Rust. That does not mean giving them unfettered ability to make changes — that would result in chaos, not an improved version of Rust — but when their vision is aligned with Rust’s values, we should ensure they have the capability and support they need to realize it.

Ownership requires trust

There is an interesting tension around ownership. Giving someone ownership of a goal is an act of faith — it means that we consider them to be an individual of high judgment who understands Rust and its values and will act accordingly. This implies to me that we are unlikely to take a goal if the owner is not known to the project. They don’t necessarily have to have worked on Rust, but they have to have enough of a reputation that we can evaluate whether they’re going to do a good job.

The design of project goal proposals includes steps designed to increase trust. Each goal includes a set of design axioms identifying the key tradeoffs that are expected and how they will be weighed against one another. The goal also identifies milestones, which shows that the author has thought about how to break up and approach the work incrementally.

It’s also worth highlighting that while the project has to trust the owner, the reverse is also true: the project hasn’t always done a good job of making good on its commitments. Sometimes we’ve asked for a proposal on a given feature and then not responded when it arrives.3 Or we set up unbounded queues that wind up getting overfull, resulting in long delays.

The project goal system has steps to build that kind of trust too: the owner identifies exactly the kind of support they expect to require from the team, and the team commits to provide it. Moreover, the general expectation is that any project goal represents an important priority, and so teams should prioritize nominated issues and the like that are related.

Trust requires accountability

Trust is something that has to be maintained over time. The primary mechanism for that in the project goal system is regular reporting. The idea is that, once we’ve identified a goal, we will create a tracking issue. Bots will prompt owners to give regular status updates on the issue. Then, periodically, we will post a blog post that aggregates these status updates. This gives us a chance to identify goals that haven’t been moving — or at least where no status update has been provided — and take a look as to see why.

In my view, it’s expected and normal that we will not make all our goals. Things happen. Sometimes owners get busy with other things. Other times, priorities change and what was once a goal no longer seems relevant. That’s fine, but we do want to be explicit about noticing it has happened. The problem is when we let things live in the dark, so that if you want to really know what’s going on, you have to conduct an exhaustive archaeological expedition through github comments, zulip threads, emails, and sometimes random chats and minutes.

Conclusion

Rust has strong values of being an open, participatory language. This is a good thing and a key part of how Rust has gotten as good as it is. Rust’s design does not belong to any one person. A key part of how we enforce that is by making decisions by consensus.

But people sometimes get confused and think consensus means that everyone has to agree. This is wrong on two levels:

  • The team must be in consensus, not the RFC thread: in Rust’s system, it’s the teams that ultimately make the decision. There have been plenty of RFCs that the team decided to accept despite strong opposition from the RFC thread (e.g., the ? operator comes to mind). This is right and good. The team has the most context, but the team also gets input from many other sources beyond the people that come to participate in the RFC thread.
  • Consensus doesn’t mean unanimity: Being in consensus means that a majority agrees with the proposal and nobody thinks that it is definitely wrong. Plenty of proposals are decided where team members have significant, even grave, doubts. But ultimately tradeoffs must be made, and the team members trust one another’s judgment, so sometimes proposals go forward that aren’t made the way you would do it.

The reality is that every good thing that ever got done in Rust had an owner – somebody driving the work to completion. But we’ve never named those owners explicitly or given them a formal place in our structure. I think it’s time we fixed that!


  1. Hat tip to Jack Huey for this turn of phrase. Clever guy. ↩︎

  2. There is a common misunderstanding that being on a Rust team for a project X means you are the one authoring code for X. That’s not the role of a team member. Team members hold the overall design of X in their heads. They review changes and mentor contributors who are looking to make a change. Of course, team members do sometimes write code, too, but in that case they are playing the role of a (particularly knowledgeable) contributor. ↩︎

  3. I still feel bad about delegation. ↩︎

Mozilla Security Blog: Rapidly Leveling up Firefox Security

At Mozilla, we believe in an open web that is safe to use. To that end, we improve and maintain the security of people using Firefox around the world. This includes a solid track record of responding to security bugs in the wild, especially those surfaced through events such as Pwn2Own. As soon as we discover a critical security issue in Firefox, we plan and ship a rapid fix. This post describes how we recently fixed an exploit discovered at Pwn2Own in less than 21 hours, a success made possible only through the collaborative and well-coordinated efforts of a global cross-functional team of release and QA engineers, security experts, and other stakeholders.

A Bit Of Context

Pwn2Own is an annual computer hacking contest where participants aim to find security vulnerabilities in major software such as browsers. Two weeks ago, this event took place in Vancouver, Canada, where participants investigated everything from Chrome, Firefox, and Safari to MS Word and even the code currently running on your car. Without getting into the technical details of the exploit here, this blog post will describe how Mozilla quickly responds to and ships updated builds for exploits found during Pwn2Own.

To give you a sense of scale, Firefox is a massive piece of software: 30+ million lines of code, six platforms (Windows 32- & 64-bit, GNU/Linux 32- & 64-bit, macOS, and Android), 90 languages, plus installers, updaters, etc. Releasing such a beast involves coordination across many cross-functional teams spanning the entire globe.

The timing of the Pwn2Own event is known weeks beforehand, so Mozilla is always ready when it rolls around! The Firefox train release calendar takes the timing of Pwn2Own into account: we try not to ship a new version of Firefox to end users on the release channel on the same day as Pwn2Own, to avoid multiple updates close together. It also means that we are prepared to ship a patched version of Firefox as soon as we know what vulnerabilities were discovered, if any.

So What Happened?

The specific exploit disclosed at Pwn2Own consisted of two bugs, a necessity when typical web content is rendered inside a proverbial browser sandbox: one bug to gain control of the content process, and a second to escape the sandbox around it. These two sophisticated exploits took an admirable amount of effort to reveal and leverage. Nevertheless, as soon as the exploit was disclosed, Mozilla engineers got to work, shipping a new release within 21 hours! We certainly weren’t the only browser “pwned”, but we were the first to patch our vulnerability. That’s right: before you knew about this exploit, we had already protected you from it.

As scary as this might sound, sandbox escapes, like many web browser exploits, are an issue common to all browsers, thanks to the evolving nature of the internet. Firefox developers are always eager to find and resolve these security issues as quickly as possible to ensure our users stay safe. We do this continuously by shipping new mitigations like win32k lockdown and site isolation, by investing in security fuzzing, and by promoting bug bounties for similar escapes. In the interest of openness and transparency, we also continuously invite and reward security researchers who share their newest attacks, which helps us keep our product safe even when there isn’t a Pwn2Own to participate in.

Related Resources

If you’re interested in learning more about Mozilla’s security initiatives or Firefox security, here are some resources to help you get started:

Mozilla Security
Mozilla Security Blog
Bug Bounty Program
Mozilla Security playlist on YouTube

Furthermore, if you want to kickstart your own security research in Firefox, we invite you to follow our deeply technical blog at Attack & Defense – Firefox Security Internals for Engineers, Researchers, and Bounty Hunters.

Past Pwn2Own Blog: https://hacks.mozilla.org/2018/03/shipping-a-security-update-of-firefox-in-less-than-a-day/


Mozilla Thunderbird: ThunderSnap! Why We’re Helping Maintain The Thunderbird Snap On Linux

We love our Linux users across all Linux distributions. That is why we’ve stepped up to help maintain the Thunderbird Snap available in the Snap Store.

Last year we took ownership of the Thunderbird Flatpak, and it has been our officially recommended package for Linux users. However, we are expanding our horizons to make sure the Thunderbird Snap experience is officially supported too. We at Thunderbird are team “free software”, independent of the packaging technology. This will mostly affect our Ubuntu users, but there are plenty of other Snap users out there as well.

Why support both the Snap and Flatpak?

In the spirit of free software, we want to support as many of our users as possible without discriminating by package preference. We are not a large company with infinite resources, so we can’t support everything under the sun. But we can make informed decisions that reach the majority of our Linux users.

The Thunderbird Snap has been well maintained by the Ubuntu desktop team for years, and we felt it was time to step up and help out.

What does this mean for me?

If you are an Ubuntu user, then you may already be using the Thunderbird Snap. The next release of Ubuntu is 24.04 (available April 25), which will be the first Ubuntu release to seed the Thunderbird Snap on the ISO. So if you do a fresh full install of Ubuntu, you will be using a Thunderbird Snap that is directly supported by the Thunderbird team.

If you are not an Ubuntu user but Snaps are still a part of your life, then you will still benefit from the same rolling updates provided by the Snap experience.

What changes are expected?

From a user perspective, you should see no changes. Just keep using whichever Thunderbird Snap channel you are comfortable with.

From a developer perspective, we have added the Snap build to our build infrastructure on Treeherder. This means that whenever a full build is triggered automatically from commits, the Snap is built as well for testing. Whenever a build is one we want to release to the public, it triggers a general flow:

  1. A version bump is pushed to the existing Thunderbird Snap GitHub repository.
  2. The existing Launchpad mirror picks up this change and automatically builds the Snap for x86 and arm64.
  3. If the Launchpad Snap build succeeds, the Snap is uploaded to the designated Snap Store channel.

So all we are changing is adding the Snap build to the Thunderbird build infrastructure and plugging it into the existing automation that feeds the Snap Store.
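
For illustration, here is a hedged sketch of what step 1 of the flow above could look like, assuming the Snap recipe keeps a simple `version: x.y.z` line in snapcraft.yaml. The file path, branch, and version string are placeholders; this is not Thunderbird’s actual release tooling.

```typescript
// Sketch of an automated version-bump commit (step 1). All names here are
// illustrative.
import { readFileSync, writeFileSync } from "node:fs";
import { execSync } from "node:child_process";

function pushVersionBump(newVersion: string): void {
  const recipe = "snapcraft.yaml";
  const yaml = readFileSync(recipe, "utf8");

  // Rewrite the top-level `version:` field in the Snap recipe.
  const updated = yaml.replace(/^version:\s*\S+/m, `version: ${newVersion}`);
  writeFileSync(recipe, updated);

  // Pushing the commit is what the Launchpad mirror reacts to: it picks up
  // the change and builds the Snap for x86 and arm64 (steps 2 and 3).
  execSync(`git commit -am "Bump Thunderbird Snap to ${newVersion}"`);
  execSync("git push origin main");
}

pushVersionBump("128.0.0"); // placeholder version
```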

Where do I report a bug on the Thunderbird Snap?

As with all supported package types of Thunderbird, we would like bugs about the Thunderbird Snap to be reported on bugzilla.mozilla.org under the Thunderbird project.


Mozilla Add-ons Blog: Developer Spotlight: Control Panel for Twitter

You can’t predict how or when success will come. In the case of Control Panel for Twitter — a Firefox extension that gives users authority over the amount of algorithmic content they’re fed — it went viral in Japan a few years ago and word spread fast. One devoted fan even jumped into the open-source code and quickly localized the extension in Japanese, further catapulting its appeal. Today, Control Panel for Twitter has more than 250,000 users from all over the world enjoying it across various browsers.

A comprehensive Options page gives you easy, intuitive control over your Twitter/X experience.

“Most of my extensions are for sites I’m a long-time user of, fixing issues which bug me, and adding missing features,” explains developer Jonny Buchannon. One of the first issues he addressed was designing a feature that moved retweets into a separate tab.

“If you don’t like the algorithmic ‘For you’ timeline, it’s usually because it’s full of random tweets about topics you’re not interested in, or worse, deliberate engagement bait. If you look at all the retweets in your timeline, they tend to have a similar problem,” explains Buchannon. “By default, following someone on Twitter lets them put any tweet in your timeline with no effort — a single click or tap — without having to add their own comment, and sometimes they do that because the tweet in question made them feel strong negative emotions; sometimes people will also retweet a string of tweets about similar topics, filling up your timeline.”

To fix this problem, the extension swaps the “For you” timeline for the “Following” (chronological) version. Control Panel for Twitter can also hide other types of Twitter/X content, like the “See new Tweets” button, “Who to follow,” “Follow some topics,” all the X Premium upsell prompts, and more.
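
As a rough illustration of how such a swap can work, here is a minimal content-script sketch. It is not the extension’s actual code, and since Twitter/X’s markup changes frequently, the selectors and label text below are assumptions.

```typescript
// Hide the algorithmic "For you" tab and click over to "Following".
// Selectors and labels are assumptions about Twitter/X's markup.
function preferFollowingTimeline(): void {
  for (const tab of document.querySelectorAll<HTMLElement>('div[role="tab"]')) {
    const label = tab.textContent ?? "";
    if (label.includes("For you")) {
      tab.style.display = "none"; // hide the algorithmic timeline's tab
    }
    if (label.includes("Following") && tab.getAttribute("aria-selected") !== "true") {
      tab.click(); // switch to the chronological timeline
    }
  }
}

// The page renders asynchronously, so re-run whenever the DOM changes.
new MutationObserver(preferFollowingTimeline).observe(document.body, {
  childList: true,
  subtree: true,
});
```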

Even with gobs of customization features already shipped, Buchannon says there’s a “huge backlog” of potential enhancements in the project’s GitHub issues. New features coming soon include the ability to control what you see in Notifications (like hiding Likes and retweets) and improvements to viewing a conversation under a focused tweet.

App-solutely atrocious experience — try Twitter/X on the mobile web!

Control Panel for Twitter is also available on Firefox for Android (addons.mozilla.org [AMO] recently launched an open ecosystem of extensions on Firefox for Android). While it may seem strange to use a mobile browser to access Twitter/X instead of the app, Buchannon says he primarily added mobile support for his own personal use. “I’m the #1 user on that front,” he says before issuing a “warning” to prospective users of his extension on Firefox for Android: “Once you get used to the changes Control Panel for Twitter makes to the experience, default Twitter is unusable — be it the app or the website.”

There are also mobile-specific features, such as changes it brings to Twitter/X search functionality. In standard Twitter/X, when you tap the Search nav you’re brought to the Explore page, which is loaded with algorithmic content. Control Panel for Twitter can hide that so you’re simply presented with a streamlined search field.

Apparently Buchannon isn’t alone in his preference for the mobile web version of Twitter/X while using his extension. He notes that Control Panel for Twitter has only been available on the App Store for Safari for a little over a year, but already 78% of its Safari users are using it on the iPhone.

Built on the same philosophy as Control Panel for Twitter, Buchannon just released Control Panel for YouTube.

“One of the main focuses of the initial version was improving the Subscription pages by automatically hiding any content you don’t want to see in there like Shorts, live streams, ‘upcoming’ videos you can’t watch now, and hiding videos you’ve already watched, so it acts more like an inbox, where videos disappear as you watch them.”

Sounds great, can’t wait to try it out. Less is often more with social media.
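
To make the “inbox” idea concrete, here is a hedged sketch of how hiding already-watched videos might be done in a content script. The element names are assumptions about YouTube’s markup, and this is not Control Panel for YouTube’s real implementation.

```typescript
// Hide any video card whose thumbnail carries a resume-playback progress
// bar, which YouTube overlays on videos you've already watched. The custom
// element tag names below are assumptions about YouTube's markup.
function hideWatchedVideos(): void {
  const watchedMarkers = document.querySelectorAll(
    "ytd-thumbnail-overlay-resume-playback-renderer"
  );
  for (const marker of watchedMarkers) {
    // Walk up to the enclosing video card and hide the whole thing.
    const card = marker.closest<HTMLElement>("ytd-rich-item-renderer");
    if (card) card.style.display = "none";
  }
}

// Re-run as YouTube lazily renders more of the Subscriptions feed.
new MutationObserver(hideWatchedVideos).observe(document.body, {
  childList: true,
  subtree: true,
});
```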

Do you have an intriguing extension development story? Do tell! Maybe your story should appear on this blog. Contact us at amo-featured [at] mozilla [dot] org and let us know a bit about your extension development journey.
