Why we collect telemetry
...our team needs visibility into how features are being used in practice. We use this data to prioritize our work and evaluate whether features are meeting real user needs.
I'm curious why corporate development teams always feel the need to spy on their users? Is it not sufficient to employ good engineering and design practices? Git has served us well for 20+ years without detailed analytics on who exactly is using which features and commands. Would Git have been significantly better if it had collected telemetry, or would the data just have been a distraction?
> I'm curious why corporate development teams always feel the need to spy on their users?
This isn’t that surprising to me. Having usage data is important for many purposes. Even Debian has an opt-in usage tracker (popcon) to see what packages they should keep supporting.
What I’m curious about is why this is included in the CLI. Why aren’t they measuring this at the API level where they wouldn’t need to disclose it to anyone? What is done locally with the GH CLI tool that doesn’t interact with the GitHub servers?
I used to believe that it was not necessary until I started building my own startup. If you don't have analytics you are flying blind. You don't know what your users actually care about and how to optimize a successful user journey. The difference between what people tell you when asked directly and how they actually use your software is actually shocking.
Wow, it really is sad how literally unthinkable it is to you and so much of the industry that you could actually talk to your users and customers like human beings instead of just data points.
And you know what happens when you reach out to talk to your customers like human beings instead of spying on them like animals? They like you more and they raise issues that your telemetry would never even think to measure.
It's called user research and client relationship management.
I think you’re overlooking that they were talking about stated and revealed preferences, a well-known challenge in economics: there is a gap between what people say is important to them and what shows up in the data. Of course you talk to users and do relationship management. That doesn’t negate the need to understand revealed preferences.
In the OSS world this is not a huge deal. You get some community that’s underserved by the product (i.e. the software package) and they fork, modify, or build something else. If that turns out to be valuable, the old solution gets complemented or replaced. In the business world this is an existential threat - you want to make sure your users aren’t better served by a competitor who’s focusing on your blind spot.
You are reading your own perception into my comment, no need to be an asshole here. Like I said elsewhere, we do both and they serve different purposes. We also make it very clear and easy to disable in the onboarding. I hope you try to build a business sometime and open up to the perspective that maybe, just maybe, you don't have all the answers.
The problem they're trying to solve is finding out which functions of their software are most useful for people and what to invest in, and using that to set product direction.
Yes, vendors can, do, and should talk to users, but then a lot of users don't like receiving cold messages from vendors (and some users go so far as to say that cold messages should _never_ be sent).
So, the alternative is to collect some soft telemetry to get usage metrics. As long as a company is upfront about it and provides an opt-out mechanism, I don't see a problem with it. Software projects (and the businesses around them) die if they don't make the right decisions.
As an open source author and maintainer, I very rarely hear from my users unless I put in the legwork to reach out to them so I completely identify with this.
If you have an existing financial relationship with someone it is by definition not a "cold message." People who think they should never, ever be contacted by a company whose service they are paying for are in the extreme minority. That's "cabin in the woods with no electricity" territory.
Marketing came to the conclusion that people don't know what they actually want. They decided to lump in engineers and programmers as well, since they started abusing their goodwill.
Apple and Microsoft reached their peak usability when they employed teams of people to literally sit and watch what users did in real life (and listen to them narrating what they want to do), take notes, and ask followup questions.
Everything went to crap in the metric-based era that followed.
You're only flying blind if you make decisions not looking and thinking. Analytics isn't the only way to figure out "what your users actually care about", you can also try the old school way, commonly referred to as "Talking with people", then after taking notes, you think about it, maybe discuss with others. Don't take what people say at face value, but think about it together with your knowledge and experience, and you'll make even better product decisions than the people who are only making "data driven decisions" all the time.
Sure, you can spend the weeks to months of expensive and time consuming work it takes to get a fuzzy, half accurate and biased picture of what your users workflows look like through user interviews and surveys. Or you can look at the analytics, which tell you everything you need to know immediately, always up to date, with perfect precision.
Sometimes HN drives me crazy. From this thread you’d think telemetry is screen recording your every move and facial expression and sending it to the government. I’ve worked at places that had telemetry and it’s more along the granularity of “how many people clicked the secondary button on the third tab?” This is a far cry from “spying on users”.
Precision isn't accuracy and all that.
> Sure, you can spend the weeks to months of expensive and time consuming work it takes to get a fuzzy, half accurate and biased picture of what your users workflows look like through user interviews and surveys. Or you can look at the analytics, which tell you everything you need to know immediately, always up to date, with perfect precision.
Yes, admittedly, the first time you do these things, they're difficult, hard and you have lots to learn. But as you do this more often, build up a knowledge base and learn about your users, you'll gain knowledge and experience you can reuse, and it'll no longer take you weeks or months of investigations to answer "Where should this button go?", you'll base it on what you already know.
Usage data is the ground truth.
Soliciting user feedback is invasive, and it's only possible for a certain class of questions.
So maybe you don't want to spend the time doing that, or, as is more often the case in corporate settings, the general turnover of the team is high enough that no one is around long enough to build that deep foundational product knowledge, and to be frank most people do not care enough.
This is why telemetry happens: it's faster, easier and more resilient to organizational turmoil.
> This is why telemetry happens: it's faster, easier and more resilient to organizational turmoil.
I don't disagree with that, I was mainly talking about trying to deliver an experience that makes sense, is intuitive and as helpful and useful as possible, even in exchange for it taking longer time.
Of course this isn't applicable in every case, sometimes you need different tradeoffs, that's OK too. But that some favor quality over shorter implementation time shouldn't drive people crazy, it's just making different tradeoffs.
I think in terms of corporate teams this is the issue a lot of the time: people just are not on the team long enough to build that knowledge. Between the constant reorgs, the layoffs these days and other churn, no one puts in the years required to gain the implicit knowledge. So orgs reach for the "tenure-independent knowledge base".
Exactly - purely "data driven" decisions are how we end up with ads really close to (or overlapping with) some button you want to press, because the data says that increases click-through rate! But it's actually a user-hostile feature that everyone hates.
We do both and they yield different learnings. They are complementary. We also have an issue tracking board with upvotes. I would say to your point that you can't improve what you don't measure.
If I was running a physical business and I wrote down each person’s name and credit card number and the exact time and order they placed, that would be pretty invasive and “spying”. If I write down how many units I sold of each item per day, and the volume of transactions by credit card vs cash, it’s anonymized and I don’t think this would generally be considered “spying”, just normal business metrics. How’s the latter much different than anonymized product analytics?
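To make the analogy concrete, here's a toy sketch of the difference (file names and values are invented purely for illustration):

    # "spying" version: one row per customer, with identity, payment details and exact time
    echo "2024-05-01T09:13:22, Jane Doe, 4111-xxxx-xxxx, latte" >> ledger_per_person.csv

    # "business metrics" version: record only what was sold, then tally per day
    echo "latte" >> sales_2024-05-01.txt
    sort sales_2024-05-01.txt | uniq -c    # e.g. "42 latte" - counts, no names, no card numbers

The second form can answer "what sells and when should I restock" without ever being able to answer "what did Jane buy".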
Watching me use my computer in my house or office is spying.
Aggregating request statistics server-side, unless you're only generating those requests to spy on what I'm doing on my computer, is more like the not-spying you're talking about.
> Watching me use my computer in my house or office is spying.
I agree, but once you cross the borders out to the internet, I'd say you need to stop seeing that as "Me sitting at my computer at home", because you're actually "on someone else's property" at that point essentially. And I say this as someone who cares greatly about preserving personal privacy.
The logical conclusion is you’re asking for no local products and everything to run server side. It’s kind of a ridiculous position that doesn’t change the spying being done other than it’s on the other side of a browser.
> If you don't have analytics you are flying blind.
We... we are talking about a CLI tool. A CLI tool that directly uses the API. A tool which already identifies itself with a User-Agent[0].
A tool which obviously knows who is using it. What information are you gathering by running telemetry on my machine that couldn't.. just. be. a. database. query?
Reading the justification, the main thing they seem to want to know is if gh is being driven by a human or an agent... Which, F off with your creepy nonsense.
Please don't just use generic "but ma analytics!" when this obviously doesn't apply here?
[0]: https://github.com/cli/cli/blob/3ad29588b8bf9f2390be652f46ee...
> The difference between what people tell you when asked directly and how they actually use your software is actually shocking.
And the difference between what they do and what they want is equally shocking. If what they want isn’t in your app, they can’t do it and it won’t show up in your data.
Quantitative data doesn’t tell you what your users want or care about. It tells you only what they are doing. You can get similar data without spying on your users.
I don’t necessarily think all data gathering is equivalent to spying, but if it’s not entirely opt-in, I think it is effectively spying no matter what you’re collecting, varying only along a dimension of invasiveness.
Excellent point.
> but if it’s not entirely opt-in, I think it is effectively spying no matter what you’re collecting, varying only along a dimension of invasiveness.
Every web page visit is logged on the http server, and that's been the default since the mid 1990's. Is that spying?
Logging every page visited is not a technical requirement of serving the requested resource.
How will you know which page is having problems being served or is having performance problems?
The totality of Microsoft's products is proof that this is false. If telemetry and analytics actually mattered for usability, every product Microsoft puts out would be good instead of garbage.
More like flying based on your knowledge as a pilot and not by the whims of your passengers.
For many CLIs and developer tooling, principled decisions need to reign. Accepting the unquantifiability of usage in a principled product is often difficult for those that are not the target demographic, but for developer tools specifically (be they programming languages, CLIs, APIs, SDKs, etc), cohesion and common sense are usually enough. It also seems real hard for product teams to accept the value of the status quo with these existing, heavily used tools.
This got me thinking: Are there prominent examples of open source projects that 1. collect telemetry, 2. without a way to opt-out (or obfuscating / making it difficult to opt-out)? This practice seems to be specific to corporate software development.
Why is it that startups and commercial software developers seem to be the only ones obsessed with telemetry? Why do they need it to "optimize user journeys" but open source projects do just fine while flying blind?
You can "optimize a successful user journey" by making the software easy to use, making it load so fast people are surprised by it, and talking to your customers. Telemetry doesn't help you do any of that, but it does help you squeeze more money out of them, or find out where you can pop an interstitial ad to goose your ad revenue, and what features you can move up a tier level to increase revenue without providing any additional value.
I think there's room for a distinction between "not using metrics" and "not using data".
Unthinkingly leaning on metrics is likely to help you build a faster, stronger horse, while at the same time avoiding building a car, a bus or a tractor.
It makes me think: which `gh` features don't generate some activity in the github API that could just as easily guide feature development without adding extra telemetry?
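As a rough sketch of that server-side option: gh already identifies itself with a distinctive User-Agent, so most usage questions could in principle be answered from ordinary access logs. The log layout, file name and exact User-Agent string below are assumptions for illustration only:

    # count which API endpoints gh-originated requests hit, most popular first
    # (assumes a combined-format access log with the request path in field 7,
    #  and that the gh User-Agent contains something like "GitHub CLI")
    grep 'GitHub CLI' access.log \
      | awk '{print $7}' \
      | sed -E 's/[0-9]+/:id/g' \
      | sort | uniq -c | sort -rn | head -20

What this can't tell you is anything about purely local commands (aliases, config) or whether a human or an agent typed the command, which seems to be exactly the gap the client-side telemetry is meant to fill.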
I'm pretty ok with the github cli tool team flying blind. The tool isn't exactly a necessary part of any workflow. You don't need telemetry to glean that
> I'm curious why corporate development teams always feel the need to spy on their users? Is it not sufficient to employ good engineering and design practices?
No, because users have different needs and thoughts from the developers. And because sometimes it's hard to get good feedback from people. Maybe everyone loves the concept of feature X, but then never uses it in practice for some reason. Or a given feature has a vocal fan base that won't actually translate to sales/real usage.
> Would Git have been significantly better if it had collected telemetry, or would the data not have just been a distraction?
I think yes, because git famously has a terrible UI, and any amount of telemetry would quickly tell you people fumble around a lot at first.
I imagine that in an alternate world, a git with telemetry would have come out with a less confusing UI because somebody would have looked at the stats and for instance have added "git restore" right from the very start, because "git checkout -- foo.txt" is an absolutely unintuitive command.
I think the big problem with telemetry is that it's too much of a black box. There is zero transparency on how that data is really used, and we have a long history of large corporates using this data to build prediction products that track people by fingerprinting behavior through these signals. There is too much at stake right now around this topic for people to trust any implementation.
Thankfully, github has zero control over git. If they did have control they would have sunk the whole operation in year one.
> because somebody would have looked at the stats and for instance have added "git restore" right from the very start, because "git checkout -- foo.txt" is an absolutely unintuitive command.
How is git restore any better? Restoring what from when? At least git checkout is clear in what it does.
> How is git restore any better? Restoring what from when? At least git checkout is clear in what it does.
And this is exactly where disconnects happen, and where you need telemetry or something like it to tell you how your users actually use the system, rather than imagining how they should.
A technical user deep into the guts of Git thinks "you need to check out again this specific file".
A novice thinks "I want to restore this file to the state it had before I touched it".
Now we can argue about whether "restore" is the ideal word here, but all the same, end users tend to think in terms of "I want to undo what I did", and not in terms of git internals.
So a hypothetical git with telemetry would probably show people repeatedly trying "git restore", "git undo", "git revert", etc, trying to find an undo command.
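For reference, the overlapping spellings being argued about here (restore and switch arrived in git 2.23, the 14-years-later addition mentioned elsewhere in the thread):

    git checkout -- foo.txt        # old spelling: discard unstaged changes to foo.txt
    git restore foo.txt            # same operation, newer spelling
    git restore --staged foo.txt   # unstage foo.txt (the checkout/reset overlap)
    git switch some-branch         # branch switching, split out of checkout

Same plumbing underneath, just narrower verbs on top.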
I don't think this is worth the effort. A user either tries to understand the data structures underlying the tool or they don't. We don't market cars to babies, right? We don't pretend the car floats around—it's inherently based on engines and wheels, and the user must understand this to operate it safely. Similarly, git is inherently based around objects and graphs, and its operations should reflect this. "Restore" has simply no meaning in this world. Restore what to when in a world where time doesn't exist?
Surely, telemetry should help educate the tool maker to reveal the underlying model rather than coercing the model to reflect a bastardized worldview (as restore seems to).
This is part of why I find jujutsu so unintuitive: there is no clear model it's built around.
> A technical user deep into the guts of Git thinks "you need to check out again this specific file".
This is a fundamental misunderstanding of both the user base who are by design all technical, and the use case that this tool serves: deeply technical and specific actions to a codebase.
Git is not just software. It is also a vernacular about how to reason about code change. You can't just make arbitrary commands do magic stuff by default and then expect that vernacular to be as strong as it is today.
Those "ergonomics" you're asking for already existed in other tools like CVS and subversion, they are specifically why those tools sucked and why git was created.
Nonsense. The "git restore" command is now an official part of git, and nothing is being lost because it's technically a git-checkout underneath. It's just a thin UI on top for convenience, nothing is being sacrificed. The old commands still work just like before.
CVS and Subversion have nothing to do with this, they were extremely different to Git in the way they worked and lost for many reasons that have nothing to do with having command names understandable to normal people.
> I think yes, because git famously has a terrible UI, and any amount of telemetry would quickly tell you people fumble around a lot at first.
1. git doesn’t have a UI, it’s a program run in a terminal environment. the terminal is the interface for the user.
2. git has a specific design that was intended to solve a specific problem in a specific way. mostly for linux kernel development. so, the UX might seem terrible to you — but remember that it wasn’t built for you, nor was it designed for people in their first ever coding boot camp. that was never git’s purpose.
3. the fact that every other tool was designed so poorly that everyone (eventually, mostly) jumped on git as a new standard is an expression of the importance of designing systems well.
"UI" is a category that contains GUI as well as other UIs like TUIs and CLIs. "UX" encompasses a lot of design work that can be distilled into the UI, or into app design, or into documentation, or somewhere else.
> “UX" encompasses a lot of design work that can be distilled into the UI
like how git needs you to “commit” changes as if you’re committing a change to a row in a database table? thats a design/experience issue to me, not an “it has commands” issue.
Mercurial was better than Git on almost any metric, it eludes me why it lost out to Git, perhaps because it lacked the kernel hacker aura, but also because it did not have a popular repository website with cute mascot going for it. Either way, tech history is full of examples of better designs not winning minds, due to cost, market timing, etc. And now with LLMs being trained on whatever was popular three years ago, we may be stuck with it forever.
This mattered because speed is the killer feature [1], and speed is often seen by users as a proxy for reliability [2].
[1]: https://bdickason.com/posts/speed-is-the-killer-feature/
[2]: https://craigmod.com/essays/fast_software/
> Mercurial was better than Git on almost any metric, it eludes me why it lost out to Git
I used Mercurial, professionally, back when there were a half-dozen serious VCS contenders, to contribute to projects that used it. I disliked it and found it unintuitive. I liked Git much better. Tastes vary.
Git made me feel like I was in control. Mercurial didn't.
Mercurial's handling of branches was focused on "one branch, one working directory", and made it hard to manage many branches within one working directory.
Mercurial's handling of branches also made it really painful to just have an arbitrary number of local experimental branches that I didn't immediately want to merge upstream. "Welcome to Mercurial, how can I merge your heads right now, you wanted to do that right now, right?"
Git's model of "a branch is just a pointer to a commit, commits internally have no idea what branch they're on" felt intuitive and comfortable.
A more intuitive git UI would reduce engagement. Do you really want to cut a 30 minute git session down to five minutes by introducing things like 'git restore' or 'git undo'? /s
> I'm curious why corporate development teams always feel the need to spy on their users
Unfortunately this is due to a large part of "decision makers" being non-technical folks who are not able to understand how the tools are actually used, as they don't use such tools themselves. So some product manager "responsible" for development tooling needs this sort of stuff to be able to perform in their job, just as some clueless product manager in e-commerce absolutely has to overload your frontend with scripts tracking your behaviour, also to be able to perform in their job. Of course the question remains why those jobs exist in the first place, as engineers were perfectly capable of designing interaction with their users before the VCs imposed the unfortunate paradigm of a deeply non-technical person somehow leading the design and development of highly technical products... So here we are, sharing our data with them, because how else will Joe collect his PM paycheck, in between prompting the AI for his slides and various "very important" meetings...
Man if I had a nickel for every time a PM asked me to violate user privacy for the purposes of making a slide that will be shown to their boss for 2.5 seconds I'd probably make enough to actually retire someday.
> Is it not sufficient to employ good engineering and design practices? Git...
Git has horrible design and ergonomics.
It is an excellent example of engineers designing interfaces for engineers without a good feedback loop.
Ironically, you just proved your point that engineers need to better understand how users are actually using their product, because their mental visualizations of how their product gets used are usually poor.
I’m curious as well. Github is one of the rare products out there that get actual valuable user feedback. So why not just ask the users for specific feedback instead of tracking all of them.
Perhaps the more interesting question is why these companies feel the need to "explain" why they are collecting telemetry or "disclose" how the data is used
The software user has no means to verify the explanation or disclosure is accurate or complete. Once the data is transferred to the company then the user has no control over where it goes, who sees it or how it is used
When the company states "We use the data for X" it is not promising to use the data for X in the future, nor does it prevent the company, or one of its "business partners", from using the data additionally for something else besides X
Why "explain" the reason for collecting telemetry
Why "disclose" how the data is used
What does this accomplish
I'm not sure if you're implying it's obvious but it's not obvious to me that it would be unhelpful.
Just anecdotally, I get the feeling telemetry often does more harm than good, because it's too easy to misinterpret or lie with statistics. There needs to be proper statistical methodology and biases need to be considered, but this doesn't always happen. Maybe a contrived example, but someone wants to show high impact on their next performance review? Implement the new feature in such a way that everyone easily misclicks it, then show the extremely high engagement as demonstration that their work is a huge success. For Git, I'm not sure it would be widely adopted today if the development process was mainly telemetry-driven rather than Torvalds developing it based solely on his expertise and intuition.
Not to mention it's really hard to statistically tell the difference between people spending a lot of time with a feature because it's really useful or because it's really difficult to get to do what you want
Telemetry is a really poor substitute for actually observing a couple of your users. But it's cheap and feels scientific and inclusive/fair (after all you are looking at everyone)
That is just poor analytics IMO; if you have a good harness you can definitely tell if a feature is not well designed. You have to optimize for things like the number of clicks to perform an operation, not time spent in the app.
I think seeing the underutilized commands and flags (with real data, not just a hunch) would have helped identify where users were not understanding why they should use them, and could have helped refine the interface and docs to make it gradually more usable.
I mean no solution is perfect, and some underused things are just only sometimes extremely useful, but data used smartly is not a waste of time.
The impact of a few more network calls and decreased privacy is basically never felt by users beyond this abstract "they're spying on me" realization. The impact of this telemetry for a product development team is material.
Not saying that telemetry is more valuable than privacy, just that it's a straightforward decision for a company to make when real benefits are only counterbalanced by abstract privacy concerns. This is why it's so universally applied across apps and tools developed commercially.
For most CLIs, I definitely feel extra network calls because they translate to real latency for commands that _should_ be quick.
If I run "gh alias set foo bar", and that takes even a marginally perceptible amount of time, I'll feel like the tool I'm using is poorly built since a local alias obviously doesn't need network calls.
I do see that `gh` is spawning a child process to do the sending in the background (https://github.com/cli/cli/blob/3ad29588b8bf9f2390be652f46ee...), which is also something I'd be annoyed at, since leaving background processes lingering in a shell's session is bad manners for a command that doesn't have a very good reason to do so.
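If you want to see both effects on your own machine, a quick and admittedly unscientific check (assumes gh is installed and on your PATH; the alias name and expansion are just examples):

    time gh alias set co 'pr checkout'   # purely local operation, should feel instant
    pgrep -af gh                         # anything gh-related still running afterwards?
                                         # (matches any command line containing "gh", so expect some noise)

Neither proves what, if anything, was sent, but it makes the latency and the lingering child process visible.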
It isn't only corporate development teams — open source development teams want to spy on their users, too. For instance, Homebrew: "Anonymous analytics allow us to prioritise fixes and features based on how, where and when people use Homebrew." [1]
[1] https://docs.brew.sh/Analytics
1. It's anonymous
2. They're telling you they're doing it
3. You can opt out of it
> Is it not sufficient to employ good engineering and design practices?
It's not that it's insufficient; new developers, product people and designers literally don't know how to make tasteful and useful decisions without first "asking users" by experimenting on them.
Used to be you built up an intuition for your user base, but considering everyone is changing jobs every year, I guess people don't have time for that anymore, so literally every decision is "data driven", and no user is ever super happy or unhappy anymore; everyone is just "OK, that's fine".
It's not the devs themselves, but the team/project/product management show that needs to pretend they are data driven, but then resort to the silliest metrics that are easy to measure.
> I'm curious why corporate development teams always feel the need to spy on their users?
I've repeatedly talked about this on HN; I call it Marketing Driven Development. It's when some Marketing manager goes to your IT manager and starts asking for things that no customer wants or needs, so they can track if their initiatives justify their job, aka are they bringing in more people to x feature?
Honestly, with something as sensitive as software developer tools, I think any sort of telemetry should ALWAYS be off by default.
> Would Git have been significantly better if it had collected telemetry
Yes, probably. Git is seriously hard to use beyond basic tasks. It has a byzantine array of commands, and the "porcelain" feels a lot closer to "plumbing" than it should. You and I are used to it, but that doesn't make it good.
I mean, it took 14 years before it gained a `switch` command! `checkout` and `reset` can do like six different things depending on how your arguments resolve, from nondestructive to very, very destructive; safe(r) operations like --force-with-lease are made harder to find than their more dangerous counterparts; it's a mess.
Analytics alone wouldn't solve the problem - you also need a team of developers who are willing to listen to their users, pore through usage data, and prioritize UX - but it would be something.
While I agree (I personally always opt out if I'm aware, and hate it when a tool suddenly gets telemetry), I don't think Git is comparable, and the same goes for Linux.
Linux and Git are fully open source, and have big companies contributing to them. If a company like Google, Microsoft etc. needs a feature, they can usually afford to hire someone to develop _and_ maintain this feature.
Something like gh is the opposite. It's maintained by a single organisation, and the team maintaining it has finite resources. I don't think it's much to ask to want to understand what features are being used, what errors might come up, etc.
Arguably yes. git has a terrible developer experience and we've only gotten to this point where everyone embraces it through Stockholm syndrome. If someone had been looking at analytics from git, they'd have seen millions of confused people trying to find the right incantation in a forest of confusing poorly named flags.
Sincerely, a Mercurial user from way back.
> I'm curious why corporate development teams always feel the need to spy on their users?
Because they're too shy, lazy, or socially awkward to actually ask their users questions.
They cover up this anxiety and laziness by saying that it costs too much, or it doesn't "scale." Both of these are false.
My company requires me to actually speak to the people who use the web sites I build; usually about every ten to twelve months. The company pays for my time, travel, and other expenses.
The company does this because it cares about the product. It has to, because it is beholden to the customers for its financial position, not to anonymous stock market trading bots a continent away.
Now, let's replicate this with GitHub. What can go wrong?
Respectfully, I think your argument defeats itself. If you can only speak to your users once every 10-12 months it means your process doesn't scale by definition. Good analytics (not useless vanity metrics) should allow you to spot a problem days after it launches, not wait 3 quarters for a user to air their grievances.
This is where (surprise surprise) I respect Valve. The hardware survey is opt in and transparent. They get useful info out of it and it’s just..not scummy.
There are all sorts of best practices for getting info without vacuuming up everyone’s data in opaque ways.
To be fair, you can be pretty sure they're heavily leveraging all their store data, in loads of ways. They probably sit on the biggest dataset of video game preferences for people in general, and I'm betting they make use of it heavily.
I'm not saying they don't engage in any of those practices, I am specifically talking about the hardware survey.
Well, you can start with everything a typical HTTP request and TCP connection comes with; surely they're already storing those things for "anti-fraud practices", and it wouldn't be far-fetched to imagine this data warehouse is used for analytics and product decisions as well.
I explicitly said I agree it’s a distinct possibility, but that’s not proof. If you have actual info on what they collect and how it’s used I can assess it. As it is we don’t know the extent or uses at all, we are speculating.
Personally if I don't have any evidence of something, I'll leave it unsaid if I like what Valve is doing about that something or not. Saying "We don't have evidence either way, we're just speculating, so therefore I respect what Valve does" feels like the wrong way around. But you do you :)
Git notoriously has had performance issues and did not scale and has had a horrible user interface. Both of these problems can be measured using telemetry and improvements can be measured once telemetry is in place.
> you're going to have to opt out of a lot more than this one setting
The opt-out situation for gh CLI telemetry is actually trickier than it sounds. gh runs in CI/CD pipelines and server environments where you may not want any outbound connections to github.com at all, not because of privacy but because of networking constraints. In those environments, the telemetry being on by default means your CI fails or your Bastion host can't reach GitHub at all.
Compare this to git itself, which is entirely local until you explicitly push. The trust model is different: git will never phone home unless you configure it to. gh, being a wrapper around the GitHub API, has to make those calls to function - but that's separate from whether it should also be collecting and uploading your command patterns.
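For the CI case, a sketch of what opting out up front might look like, based on the config key discussed further down in this thread (note the key currently prints an "unknown configuration key" warning, and whether it suppresses every outbound call is exactly what's unclear):

    # first step of the job, before any other gh invocation
    gh config set telemetry false
    grep telemetry ~/.config/gh/config.yml   # confirm the value was written despite the warning

An environment-variable switch would be friendlier for ephemeral CI runners, but I don't know of a documented one, so treat the above as a best guess rather than the supported mechanism.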
If you have 3 of your developers spending 80% of their time in an area of the codebase that gets no usage and you don't see a path forward that realistically is likely to increase usage, it can be a better use of developer time to focus them elsewhere or even rethink the feature.
The problem I have with a lot of these analytics is that while there are harmless ways to use it, there is this understanding that they could be tying your unique identifier to behavioral patterns which could be used to reconstruct your identity with machine learning. It's even worse if they include timestamps.
Why not just expose exactly what telemetry is being sent when it's sent? Like add an option that makes telemetry verbose, but doesn't send it unless you enable it. That way you can evaluate it before you decide to turn it on. Whenever you do the Steam Hardware survey it'll show you what gets sent. This is the right way to do it.
Do people think that GitHub isn't already collecting and aggregating all the requests sent to their servers, which is after all the entire point of the gh CLI?
If you don't want your requests tracked, you're going to have to opt out of a lot more than this one setting.
Data is on their server, so obviously they are already doing it; they just want to increase tracking by also knowing what goes to GitLab, Codeberg and such, by adding client-side metrics.
I did not get that impression from these docs or from a brief look through the gh CLI codebase. Can you point to evidence that makes you believe this is used to collect metrics about requests to other services?
So happy I deployed gitea to my homelab last month. It's got an import feature from github and honestly it's just faster and has better uptime than github. Claude can use it just fine with the tea cli and git. It's pretty much a knockoff github, but I think it's better so far.
I’m running Forgejo which has the same core code and yeah it’s amazing. Faster and better uptime indeed. It even works when my internet goes down because it’s on a Pi 4 here in the cabinet next to my desk. Backups are done with borg and syncthing to an offsite location. It takes a bit of work setting it up but after that maintenance time is near zero. I just manually SSH in once every two weeks to check SSD space and RAM usage, run apt update and upgrade, and do major version bumps.
Good for GitHub. All companies need this. Some use it to improve products, some use it for less commendable goals. I know HN crowd is allergic to telemetry but if you've ever developed a software as a service, telemetry is indispensable.
Thinking out loud: what are the best practices to vet a tool's telemetry details? The devil is in the details.
A quick summary of my Claude-assisted research at the Gist below. Top of mind is some kind of trusted intermediary service with a vested interest in striking a definable middle ground that is good enough for both sides (users and product-builders)
At my last job they used gh features heavily - pull requests, issues, and gha most of all. So having the cli made it possible to automate github-specific tasks (or have agents interact with them).
At my current job, I sometimes set up a Nix shell with the GitHub CLI, since that lets Claude Code associate a feature branch to a pull request. The LLM can then retrieve PR description, workflow results, review comments, etc.
Also, I believe GitHub Actions cache cannot be bulk deleted outside of the CLI. The first time I [hesitantly] used the gh CLI was to empty GitHub Actions cache. At the time it wasn't possible with the REST API or web interface.
gh is insanely powerful, especially if you let your coding agent use it. It’s one of my top tools. gh lets you use GitHub features such as issues, pull requests, reading CI pipelines, creating CI pipelines, etc. git is just for code version control.
PRs, and managing repos, and other things that aren't git features. You can use it to auth with GITHUB_TOKEN instead of ssh or http. Which is how my agents get access. I've switched to gitea, it's got all the same features.
dev tools and especially libraries must not have telemetry unless absolutely strictly necessary (and even then!).
* Dev tools because you need to be able to trust they don't leak while you're working. Not all sites/locations/customers/projects allow leaks, and it's easier to just blacklist anything that does leak, so you know you can trust your tools, and the same habits, justfiles, etc work everywhere.
* libraries that leak deserve a special kind of hell. You add a library to your project, and now it might be leaking without warning. If a lot of libraries decide to leak, your application is now an unmanageable sieve.
If you do need to run telemetry, make it opt-in, or end-user only. But if you as a developer don't even have control then that's the worst.
I suggest anyone who cares, and certainly anybody in the EU, mails privacy@github.com and also opens a support ticket to let them know exactly what you think.
Being a good regulator is about solving a nearly impossible satisficing problem. You have to follow the law and achieve results with a limited budget and political constraints. Given the priorities of, say, the FTC, I don't think GitHub is even a blip on their radar.
I know lots of idealists -- I went to a public policy school. And in some areas, I am one myself. We need them; they can push for their causes.
But if you ever find yourself working as a regulator, you'll find the world is complicated and messy. Regulators that overreach often make things worse for the very causes they support.
If you haven't yet, go find some regulators that have to take companies all the way to court and win. I have known some in certain fields. Learn from them. Some would probably really enjoy getting to talk to a disinterested third party to learn the domain. There are even ways to get involved as a sort of citizen journalist if you want.
But these sort of blanket calls for "make an example of GitHub" are probably a waste of time. I think a broader view is needed here. Think about the causal chain of problems and find a link where you have leverage. Then focus your effort on that link.
I live in the DC area, where ignorance of how the government works leads to people walking away and not taking you seriously. When tech people put comparable effort into understanding the machinery of government that they do into technology, that is awesome. There are some amazing examples of this if you look around.
There are no excuses. Tech people readily accept that they have to work around the warts of their infrastructure. (We are lucky because we get to rebuild so much.) But we forget what it's like to work with systems that resist change. Anyhow, we have no excuse to blame the warts in our governmental system. You either fix them or work around them or both.
The world is a big broken machine. Almost no individual person is to blame. You just have to understand where to turn the wrench.
It is interesting how GitHub sort of prominently features this non-word in their article. Perhaps some South Asian or European person for whom English is a struggle.
There is no word that means "fake-anonymous". I would assume that the author of this article intended to write "pseudonymous" which is a real word with a real definition.
The current century is the century of enshittification. Like a cancer, there is now a whole generation of PMs who think it is totally OK and legitimate to update your product to add spying on your users' usage.
It might seem legitimate to them, but I'm quite sure that just listening to your users is enough. It's not like they lack a user base ready to interact with them, or that they lack bugs or features to work on.
In most cases, the telemetry is more of a vanity metric that is rarely used. "Congratz to this team that did the flag that is the most used in the cli".
But even for product decisions, it is hard to extract conclusions from current usage, because what you can and will do today is already dependent on the way the cli is built.
A feature might not be used a lot because it is not convenient to do, or not available in a good way compared to an alternative, but a usage report will not tell you whether it was useful or not.
In the same way, when I buy a product, often there are a lot of features that I will never use, but that I'm happy to have. And I might not have bought the product, or might have bought another one, if they were not available. But the worst would be the manufacturer removing or disabling a feature because it is not used...
I mean, it makes sense, of course. How else could they possibly know what users want? Run a bug tracker? Use their own software? Have more than one 9 of uptime? /s
Corporations can and will do every scummy thing permitted to them by law, so here we are. Until the US grows a backbone on issues of privacy, we shouldn't be surprised, I suppose. But the US won't be growing such a backbone anytime in the near future.
Also note that even though you get a warning about an unknown config key, the value is actually set so you're future-proof. Check `grep telemetry ~/.config/gh/config.yml`
> gh config set telemetry false
> ! warning: 'telemetry' is not a known configuration key
What's strange is if you check your `~/.config/gh/config.yml` it will put `telemetry: disabled` in there. But it will put anything in that `config.yml` lol.
> gh config set this-is-some-random-bullshit aww-shucks
> ! warning: 'this-is-some-random-bullshit' is not a known configuration key
... don't forget to recheck this info every update, restore flags that have been "accidentally" reset and set any new flags that they added for "different" telemetry
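If you want the re-check to happen automatically, one low-tech option is a nag line in your shell profile (path taken from the comment above; adjust if your config lives elsewhere):

    # e.g. in ~/.profile or ~/.bashrc
    grep -q 'telemetry' ~/.config/gh/config.yml 2>/dev/null \
      || echo 'note: gh telemetry opt-out is not set in ~/.config/gh/config.yml'

Crude, but it survives upgrades that quietly drop the key.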
This isn’t that surprising to me. Having usage data is important for many purposes. Even Debian has an opt-in usage tracker (popcon) to see wha packages they should keep supporting.
What I’m curious about is why this is included in the CLI. Why aren’t they measuring this at the API level where they wouldn’t need to disclose it to anyone? What is done locally with the GH CLI tool that doesn’t interact with the GitHub servers?
And you know what happens when you reach out to talk to your customers like human beings instead of spying on them like animals? They like you more and they raise issues that your telemetry would never even think to measure.
It's called user research and client relationship management.
In the OSS world this is not a huge deal. You get some community that’s underserved by the product (ie software package) and they fork, modify, or build something else. If it turned out to be valuable, then you get the old solution complemented or replaced. In the business world this is an existential threat to the business - you want to make sure your users aren’t better served by a competitor who’s focusing on your blindspot.
Yes, vendors can, do, and should talk to users, but then a lot of users don't like receiving cold messages from vendors (and some users go so far as to say that cold messages should _never_ be sent).
So, the alternative is to collect some soft telemetry to get usage metrics. As long as a company is upfront about it and provides an opt-out mechanism, I don't see a problem with it. Software projects (and the businesses around them) die if they don't make the right decisions.
As an open source author and maintainer, I very rarely hear from my users unless I put in the legwork to reach out to them so I completely identify with this.
Everything went to crap in the metric-based era that followed.
Sometimes HN drives me crazy. From this thread you’d think telemetry is screen recording your every move and facial expression and sending it to the government. I’ve worked at places that had telemetry and it’s more along the granularity of “how many people clicked the secondary button on the third tab?” This is a far cry from “spying on users”.
Precision isn't accuracy and all that.
Yes, admittedly, the first time you do these things, they're difficult, hard and you have lots to learn. But as you do this more often, build up a knowledge base and learn about your users, you'll gain knowledge and experience you can reuse, and it'll no longer take you weeks or months of investigations to answer "Where should this button go?", you'll base it on what you already know.
Usage data is the ground truth.
Soliciting user feedback is invasive, and it's only possible for a certain class of questions.
This is why telemetry happens, its faster, easier and more resilient to organizational turmoil.
I don't disagree with that, I was mainly talking about trying to deliver an experience that makes sense, is intuitive and as helpful and useful as possible, even in exchange for it taking longer time.
Of course this isn't applicable in every case, sometimes you need different tradeoffs, that's OK too. But that some favor quality over shorter implementation time shouldn't drive people crazy, it's just making different tradeoffs.
I think in terms of corporate teams this is the issue a lot of times, people just are not on the team long enough to build that knowledge. Between the constant reorgs, these days layoffs and other churn the no one puts in the years required to gain the implicit knowledge. So orgs reach for the "tenure independent knowledge base.
Aggregating request statistics server-side unless you're only generating those requests to spy on what I'm doing on my computer is more like the not-spying you're talking about.
I agree, but once you cross the borders out to the internet, I'd say you need to stop seeing that as "Me sitting at my computer at home", because you're actually "on someone else's property" at that point essentially. And I say this as someone who care greatly about preserving personal privacy.
We... we are talking about a CLI tool. A CLI tool that directly uses the API. A tool which already identifies itself with a User-Agent[0].
A tool which obviously knows who is using it. What information are you gathering by running telemetry on my machine that couldn't.. just. be. a. database. query?
Reading the justification the main thing they seem to want to know is if gh is being driven by a human or an agent... Which, F off with your creepy nonsense.
Please don't just use generic "but ma analytics!" when this obviously doesn't apply here?
[0]: https://github.com/cli/cli/blob/3ad29588b8bf9f2390be652f46ee...
And the difference between what they do and what they want is equally shocking. If what they want isn’t in your app, they can’t do it and it won’t show up in your data.
Quantitative data doesn’t tell you what your users want or care about. It tells you only what they are doing. You can get similar data without spying on your users.
I don’t necessarily think all data gathering is equivalent to spying, but if it’s not entirely opt-in, I think it is effectively spying no matter what you’re collecting, varying only along a dimension of invasiveness.
Excellent point.
> but if it’s not entirely opt-in, I think it is effectively spying no matter what you’re collecting, varying only along a dimension of invasiveness.
Every web page visit is logged on the http server, and that's been the default since the mid 1990's. Is that spying?
Logging every page visited is not a technical requirement of serving the requested resource.
How will you know which page is having problems being served or is having performance problems?
More like flying based on your knowledge as a pilot and not by the whims of your passengers.
For many CLIs and developer tooling, principled decisions need to reign. Accepting the unquantifiability of usage in a principled product is often difficult for those that are not the target demographic, but for developer tools specifically (be they programming languages, CLIs, APIs, SDKs, etc), cohesion and common sense are usually enough. It also seems real hard for product teams to accept the value of the status quo with these existing, heavily used tools.
Why is it that startups and commercial software developers seem to be the only ones obsessed with telemetry? Why do they need it to "optimize user journeys" but open source projects do just fine while flying blind?
Unthinkingly leaning on metrics is likely to help you build a faster, stronger horse, while at the same time avoiding building a car, a bus or a tractor.
No, because users have different needs and thoughts from the developers. And because sometimes it's hard to get good feedback from people. Maybe everyone loves the concept of feature X, but then never uses it in practice for some reason. Or a given feature has a vocal fan base that won't actually translate to sales/real usage.
> Would Git have been significantly better if it had collected telemetry, or would the data not have just been a distraction?
I think yes, because git famously has a terrible UI, and any amount of telemetry would quickly tell you people fumble around a lot at first.
I imagine that in an alternate world, a git with telemetry would have come out with a less confusing UI because somebody would have looked at the stats and for instance have added "git restore" right from the very start, because "git checkout -- foo.txt" is an absolutely unintuitive command.
Thankfully, github has zero control over git. If they did have control they would have sank the whole operation on year one
> because somebody would have looked at the stats and for instance have added "git restore" right from the very start, because "git checkout -- foo.txt" is an absolutely unintuitive command.
How is git restore any better? Restoring what from when? At least git checkout is clear in what it does.
And this is exactly where disconnects happen, and where you need telemetry or something like it to tell you how your users actually use the system, rather than imagining how they should.
A technical user deep into the guts of Git thinks "you need to check out again this specific file".
A novice thinks "I want to restore this file to the state it had before I touched it".
Now we can argue about whether "restore" is the ideal word here, but all the same, end users tend to think it terms of "I want to undo what I did", and not in terms of git internals.
So a hypothetical git with telemetry would probably show people repeatedly trying "git restore", "git undo", "git revert", etc, trying to find an undo command.
Surely, telemetry should help educate the tool maker to reveal the underlying model rather than coercing the model to reflect a bastardized worldview (as restore seems to).
This is part of why I find jujustsu so unintuitive: there is no clear model it's built around.
This is a fundamental misunderstanding of both the user base who are by design all technical, and the use case that this tool serves: deeply technical and specific actions to a codebase.
Git is not just software. It is also a vernacular about how to reason about code change. You can't just make arbitrary commands do magic stuff by default and then expect that vernacular to be as strong as it is today.
Those "ergonomics" you're asking for already existed in other tools like CVS and subversion, they are specifically why those tools sucked and why git was created.
CVS and Subversion have nothing to do with this, they were extremely different to Git in the way they worked and lost for many reasons that have nothing to do with having command names understandable to normal people.
1. git doesn’t have a UI, it’s a program run in a terminal environment. the terminal is the interface for the user.
2. git has a specific design that was intended to solve a specific problem in a specific way. mostly for linux kernel development. so, the UX might seem terrible to you — but remember that it wasn’t built for you, nor was it designed for people in their first ever coding boot camp. that was never git’s purpose.
3. the fact that every other tool was designed so poorly that everyone (eventually, mostly) jumped on git as a new standard is an expression of the importance of designing systems well.
like how git needs you to “commit” changes as if you’re committing a change to a row in a database table? thats a design/experience issue to me, not an “it has commands” issue.
This mattered because speed is the killer feature [1], and speed is often seen by users as a proxy for reliability [2].
[1]: https://bdickason.com/posts/speed-is-the-killer-feature/
[2]: https://craigmod.com/essays/fast_software/
I used Mercurial, professionally, back when there were a half-dozen serious VCS contenders, to contribute to projects that used it. I disliked it and found it unintuitive. I liked Git much better. Tastes vary.
Git made me feel like I was in control. Mercurial didn't.
Mercurial's handling of branches was focused on "one branch, one working directory", and made it hard to manage many branches within one working directory.
Mercurial's handling of branches also made it really painful to just have an arbitrary number of local experimental branches that I didn't immediately want to merge upstream. "Welcome to Mercurial, how can I merge your heads right now, you wanted to do that right now, right?"
Git's model of "a branch is just a pointer to a commit, commits internally have no idea what branch they're on" felt intuitive and comfortable.
Unfortunately this is due to a large part of "decision makers" being non-technical folks, not being able to understand how the tools is actually used, as they don't use such tools themselves. So some product manager "responsible" for development tooling needs this sort of stuff to be able to perform in their job, just as some clueless product manager in the e-commerce absolutely has to overload your frontend with scripts tracking your behaviour, also to be able to perform in their job. Of course the question remains, why do those jobs exist in the first place, as the engineers were perfectly capable of designing interaction with their users before the VCs imposed the unfortunate paradigm of a deeply non-technical person somehow leading the design and development of highly technical products...So here we are, sharing our data with them, because how else will Joe collect their PM paycheck, in between prompting the AI for his slides and various "very important" meetings...
Git has horrible design and ergonomics.
It is an excellent example of engineers designing interfaces for engineers without a good feedback loop.
Ironically, you just proved your point that engineers need to better understand how users are actually using their product, because their mental visualizations of how their product gets used is usually poor.
The software user has no means to verify the explanation or disclosure is accurate or complete. Once the data is transferred to the company then the user has no control over where it goes, who sees it or how it is used
When the company states "We use the data for X" it is not promising to use the data for X in the future, nor does it prevent the company, or one of its "business partners", from using the data additionally for something else besides X
Why "explain" the reason for collecting telemetry
Why "disclose" how the data is used
What does this accomplish
I'm not sure if you're implying it's obvious but it's not obvious to me that it would be unhelpful.
Telemetry is a really poor substitute for actually observing a couple of your users. But it's cheap and feels scientific and inclusive/fair (after all you are looking at everyone)
I mean no solution is perfect, and some underused things are just only sometimes extremely useful, but data used smartly is not a waste of time.
Not saying that telemetry more valuable than privacy, just that it's a straightforward decision for a company to make when real benefits are only counterbalanced by abstract privacy concerns. This is why it's so universally applied across apps and tools developed commercially.
If I run "gh alias set foo bar", and that takes even a marginally perceptible amount of time, I'll feel like the tool I'm using is poorly built since a local alias obviously doesn't need network calls.
I do see that `gh` is spawning a child to do sending in the background (https://github.com/cli/cli/blob/3ad29588b8bf9f2390be652f46ee...), which also is something I'd be annoyed at since having background processes lingering in a shell's session is bad manners for a command that doesn't have a very good reason to do so.
[1] https://docs.brew.sh/Analytics
1. It's anonymous
2. They're telling you they're doing it
3. You can opt out of it
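For comparison, if I remember the brew docs right, Homebrew's opt-out is a single command or an environment variable:
brew analytics off              # persists the opt-out
export HOMEBREW_NO_ANALYTICS=1  # or set this, e.g. in CI images and dotfiles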
It's not that it's insufficient; it's that new developers, product people, and designers literally don't know how to make tasteful and useful decisions without first "asking users" by experimenting on them.
It used to be that you built up an intuition for your user base, but with everyone changing jobs every year, I guess people don't have time for that anymore. So literally every decision is "data driven", and no user is delighted or upset anymore; everyone is just "OK, that's fine".
I've repeatedly talked about this on HN; I call it Marketing Driven Development. It's when some Marketing manager goes to your IT manager and starts asking for things that no customer wants or needs, so they can track whether their initiatives justify their job, i.e., are they bringing more people to feature X?
Honestly, with something as sensitive as software developer tools, I think any sort of telemetry should ALWAYS be off by default.
Yes, probably. Git is seriously hard to use beyond basic tasks. It has a byzantine array of commands, and the "porcelain" feels a lot closer to "plumbing" than it should. You and I are used to it, but that doesn't make it good.
I mean, it took 14 years before it gained a `switch` command! `checkout` and `reset` can do like six different things depending on how your arguments resolve, from nondestructive to very, very destructive; safe(r) operations like --force-with-lease are made harder to find than their more dangerous counterparts; it's a mess.
Analytics alone wouldn't solve the problem - you also need a team of developers who are willing to listen to their users, pore through usage data, and prioritize UX - but it would be something.
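To make the overloading concrete, here's a quick illustration with plain git (the branch, file, and commit names are placeholders); the post-2.23 switch/restore commands split checkout's roles apart, and the safer force-push is the longer one to discover and type:
git checkout my-branch              # switch branches
git checkout -- src/main.c          # silently throw away local edits to a file (destructive)
git checkout abc1234 -- src/main.c  # restore a file from an old commit
git switch my-branch                # the dedicated replacements, since git 2.23
git restore src/main.c
git restore --source=abc1234 src/main.c
git push --force-with-lease         # safer than plain -f, but harder to find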
Linux and Git are fully open source and have big companies contributing to them. If a company like Google or Microsoft needs a feature, it can usually afford to hire someone to develop _and_ maintain that feature.
Something like gh is the opposite. It's maintained by a single organisation, and the team maintaining it has finite resources. I don't think it's much to ask to want to understand what features are being used, what errors come up, etc.
Sincerely, a Mercurial user from way back.
Because they're too shy, lazy, or socially awkward to actually ask their users questions.
They cover up this anxiety and laziness by saying that it costs too much, or it doesn't "scale." Both of these are false.
My company requires me to actually speak to the people who use the web sites I build; usually about every ten to twelve months. The company pays for my time, travel, and other expenses.
The company does this because it cares about the product. It has to, because it is beholden to the customers for its financial position, not to anonymous stock market trading bots a continent away.
Now, let's replicate this with GitHub. What can go wrong?
There are all sorts of best practices for getting info without vacuuming up everyone’s data in opaque ways.
I’m not saying they don’t engage in any of those practices, I am specifically talking about the hardware survey.
Well, you can start with everything a typical HTTP request and TCP connection comes with. Surely they're already storing those things for "anti-fraud practices", and it wouldn't be far-fetched to imagine that data warehouse being used for analytics and product decisions as well.
I explicitly said I agree it’s a distinct possibility, but that’s not proof. If you have actual info on what they collect and how it’s used I can assess it. As it is we don’t know the extent or uses at all, we are speculating.
The hardware survey is not that.
The opt-out situation for gh CLI telemetry is actually trickier than it sounds. gh runs in CI/CD pipelines and server environments where you may not want any outbound connections to github.com at all, not because of privacy but because of networking constraints. In those environments, the telemetry being on by default means your CI fails or your Bastion host can't reach GitHub at all.
Compare this to git itself, which is entirely local until you explicitly push. The trust model is different: git will never phone home unless you configure it to. gh, being a wrapper around the GitHub API, has to make those calls to function - but that's separate from whether it should also be collecting and uploading your command patterns.
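For the CI case, the opt-out presumably has to happen in the environment before gh ever runs; a minimal sketch, assuming the GH_TELEMETRY / DO_NOT_TRACK variables and the config key mentioned elsewhere in this thread actually gate it:
export GH_TELEMETRY=false          # gh-specific switch, set in the job or runner environment
export DO_NOT_TRACK=true           # the generic console-app convention
gh config set telemetry disabled   # or persist it in the runner image (gh 2.91.0+)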
The problem I have with a lot of these analytics is that while there are harmless ways to use it, there is this understanding that they could be tying your unique identifier to behavioral patterns which could be used to reconstruct your identity with machine learning. It's even worse if they include timestamps.
Why not just expose exactly what telemetry is being sent when it's sent? Like add an option that makes telemetry verbose, but doesn't send it unless you enable it. That way you can evaluate it before you decide to turn it on. Whenever you do the Steam Hardware survey it'll show you what gets sent. This is the right way to do it.
> Removes the env var that gates telemetry, so it will be on by default.
If you don't want your requests tracked, you're going to have to opt out of a lot more than this one setting.
Embrace, extend, extinguish.
The first two have been done.
I give it five years before the GH CLI is the only way to interact with GitHub repos.
Then the third will also be done, and the cycle is complete.
I'll take that bet. How much are you willing to put on it?
A quick summary of my Claude-assisted research at the Gist below. Top of mind is some kind of trusted intermediary service with a vested interest in striking a definable middle ground that is good enough for both sides (users and product-builders)
Gist: WIP
P.S. You look like a villain from Temu.
For example
will run and poll the CI checks of a PR and exit 0 once they all pass.
Also, I believe GitHub Actions cache cannot be bulk deleted outside of the CLI. The first time I [hesitantly] used the gh CLI was to empty GitHub Actions cache. At the time it wasn't possible with the REST API or web interface.
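Hedging a bit, since the exact commands aren't quoted above: the PR-watching example is presumably gh pr checks with --watch, and the cache subcommand (added in later gh 2.x releases) is what makes the bulk delete possible:
gh pr checks --watch    # poll the current branch's PR checks; exits 0 once they all pass
gh cache list           # inspect the repo's Actions caches
gh cache delete --all   # remove them all, which the web UI and REST API once couldn't do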
* Dev tools because you need to be able to trust they don't leak while you're working. Not all sites/locations/customers/projects allow leaks, and it's easier to just blacklist anything that does leak, so you know you can trust your tools, and the same habits, justfiles, etc work everywhere.
* libraries that leak deserve a special kind of hell. You add a library to your project, and now it might be leaking without warning. If a lot of libraries decide to leak, your application is now an unmanageable sieve.
If you do need to run telemetry, make it opt-in or end-user only. But if you as a developer don't even have control, then that's the worst.
Today I use a Golang CLI made with ~200K LOC to do essentially the same thing. Yay, efficiency?
Regulators should wake up and fine them hard, so hard it becomes existential. Make an example of them so others don't follow.
I know lots of idealists -- I went to a public policy school. And in some areas, I am one myself. We need them; they can push for their causes.
But if you ever find yourself working as a regulator, you'll find the world is complicated and messy. Regulators that overreach often make things worse for their very causes they support.
If you haven't yet, go find some regulators who have to take companies all the way to court and win. I have known some in certain fields. Learn from them. Some would probably really enjoy getting to talk to a disinterested third party to learn the domain. There are even ways to get involved as a sort of citizen journalist if you want.
But these sort of blanket calls for "make an example of GitHub" are probably a waste of time. I think a broader view is needed here. Think about the causal chain of problems and find a link where you have leverage. Then focus your effort on that link.
I live in the DC area, where ignorance of how the government works leads to people walking away and not taking you seriously. When tech people put comparable effort into understanding the machinery of government that they do into technology, that is awesome. There are some amazing examples of this if you look around.
There are no excuses. Tech people readily accept that they have to work around the warts of their infrastructure. (We are lucky because we get to rebuild so much.) But we forget what it's like to work with systems that resist change. Anyhow, we have no excuse to blame the warts in our governmental system. You either fix them or work around them or both.
The world is a big broken machine. Almost no individual person is to blame. You just have to understand where to turn the wrench.
https://en.wiktionary.org/w/index.php?search=pseudoanonymous...
It is interesting how GitHub sort of prominently features this non-word in their article. Perhaps some South Asian or European person for whom English is a struggle.
There is no word that means "fake-anonymous". I would assume that the author of this article intended to write "pseudonymous" which is a real word with a real definition.
https://en.wiktionary.org/wiki/pseudonymous
But it would also be interesting if they very much intended the ambiguity of using a non-word that is more than it seems on the surface.
the old git command in your terminal
I think I'll keep using that
It might seem legit on their part, but I'm quite sure that just listening to your users is enough. It's not like they lack a user base ready to interact with them, or that they lack bugs or features to work on.
In most cases, the telemetry is more of a vanity metric that is rarely used. "Congrats to the team that built the most-used flag in the CLI." But even for product decisions, it is hard to extract conclusions from current usage, because what you can and will do today already depends on how the CLI is built. A feature might not be used a lot because it is inconvenient, or not available in as good a form as some alternative, but a usage report will not tell you whether it was useful. In the same way, when I buy a product, there are often a lot of features that I will never use but that I'm happy to have, and I might not have bought the product, or might have bought another one, if they were not available. The worst would be the manufacturer removing or disabling a feature because it is not used...
Corporations can and will do every scummy thing permitted to them by law, so here we are. Until the US grows a backbone on issues of privacy, we shouldn't be surprised, I suppose. But the US won't be growing such a backbone anytime in the near future.
export GH_TELEMETRY=false
export DO_NOT_TRACK=true
gh config set telemetry disabled (starting from version 2.91.0, which this announcement refers to)
gh version 2.90.0 (2026-04-16) https://github.com/cli/cli/releases/tag/v2.90.0
$ gh config set telemetry disabled
! warning: 'telemetry' is not a known configuration key
Also note that even though you get a warning about an unknown config key, the value is actually set so you're future-proof. Check `grep telemetry ~/.config/gh/config.yml`
What's strange is if you check your `~/.config/gh/config.yml` it will put `telemetry: disabled` in there. But it will put anything in that `config.yml` lol.
$ gh config set this-is-some-random-bullshit aww-shucks
! warning: 'this-is-some-random-bullshit' is not a known configuration key
But in my config.yml there is
this-is-some-random-bullshit: aww-shucks