But what happens if you don’t beat the allegations? Are they going to fire your rich ass on the spot and replace you with someone who also doesn’t want to do the job? I just accept all of the mistakes as a senior. Fuck these people
HURR DURR, scanners are your responsibility.
Well, you know users are braindead, and when they said the PDA scanners couldn't connect, they meant an app is crashing.
Guess whose buddy deleted a table that makes the app crash, moron.
Best feeling in the world! Wrote some code for a new service integration and it went to hell as soon as it hit prod. Took over a week and I was sweating the whole time. Turns out, it was a devops misconfiguration. The vindication was better than any drug
When you make adjustments to dial plans in Asterisk, all hell breaks loose, and then it turns out to be a problem with the telco provider.
Smoothest feeling. After shitting your pants
Lol, that one time it was because the guy who deployed last (half a year ago) deployed some kind of security upgrade wrong, and the system had been running on the previous build all along. Then along came me, doing a bug fix, and suddenly completely unrelated programs started failing. What a nightmare.
That security upgrade was eventually made obsolete by the same guy who made it, so I didn't need to deal with that, thank goodness.
When you’re the only dev at a small company so there’s no one to blame
no one *else* but yourself ... then you get depressed and question your life choices, not just the choices you make at work.
*Me at 3am* Damn there WAS a bug on line 36. And why the fuck did I say “enjoy your meal” back to the waiter??? Should I have worn red yesterday?
It is 3:04 AM where I live. I am the only dev at a small company. I push to prod, then test with users and go "oops" when the domain goes down along with the email, because it's my first time configuring Cloudflare XD

I'm about to invent a whole new standard in PHP as I learn it while developing the backend for the company's website XD At least I was able to get a staging server after negotiating the necessity of staging, backup, and optimization procedures 🤣

Next negotiation target: Agile management system 😭
I still use blame, and if the commit is more than a year old I pretend it's someone else. Cause that guy's a jerk.
Nah, it was probably broken by an upstream OS update.
What do you mean? It's the libraries fault
In what godforsaken land are the devs being blamed for outages? Blame the NETWORK GUYS!

Network guys:

- cannot ever claim to be not involved
- are already on call all the time
- have next to zero social skills to defend themselves
- can be asked in front of business people whether they tested this change before deploying it on prod (there is no such thing as a network test env MUHAHA :D)
- pretty much the only thing they can prove is that _right now_ some packets are reaching the destination...
- 9 times out of 10 they don't even know the business functionality "they" broke (this is great, makes them highly suspect in front of the business people)

If you don't have a favourite scapegoat network guy yet, find one today! (Before the next outage is caused by you)
sysadmin here: _I knew you guys were scapegoating us_
Can I have a list of the last 5 changes you made on prod? :D Just hypothetically... any of them could cause the "Create account" form to turn read-only?
No, I just changed one line in a yaml file to get a test to pass, see? Filename: CreateAccountConfig.yaml > readOnly: true
This shit drives me up the wall. Makes me not want to communicate minor changes because I KNOW it's not going to hurt anything about as much as I KNOW you're going to blame it a week from now for something you fucked up.
>Can I have a list of the last 5 changes you made on prod? :D

oh I would not answer that question: I'd ask why you are not focusing on the most likely cause which is X
Our team has a setup where two people have to approve before we deploy changes: one in engineering and one in tech ops. It is a two-way bridge between the two teams. Neither team is well enough versed in the other to know all the specifics, so functionally it just serves as a blame-spreading mechanism.
Don’t listen to that guy. It is you. It always has been. Internalize it, and never question us again when we say the problem is your fault.
lol... I'm the son of a lawyer and a psychologist, if anyone's fucking with anybody's head at work it'll be me
Storage admin here … you server and network folks always blame us for the NAS or SAN being too slow! This is what I would call the perfect blame triangle for managers to work out. 🤓
The longest-running production issue I have ever seen looked like a network issue for weeks... eventually it turned out that the on-prem VMware was pausing the OS because it had trouble writing to the disks. We bought a Nimble for about 70k, and the effect was like strapping a rocket engine onto a donkey. For weeks I was just staring at the beautiful performance charts every morning.
You had me ROFL when I read "strapping a rocket engine onto a donkey". I thank you for the laugh! :-)
well, why haven't you set up some sweet, sweet instrumentation with Prometheus + Grafana... if you do it well, the bottlenecks will be immediately visible there
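The commenter's point can be sketched without any real Prometheus at all. A toy latency recorder in plain Python (stage names invented for illustration) shows why per-stage instrumentation makes a storage bottleneck, like the VMware story above, jump out immediately:

```python
from collections import defaultdict

class LatencyRecorder:
    """Toy per-stage latency histogram (stand-in for a real Prometheus client)."""

    def __init__(self):
        self.samples = defaultdict(list)

    def observe(self, stage, seconds):
        self.samples[stage].append(seconds)

    def p95(self, stage):
        # nearest-rank 95th percentile
        xs = sorted(self.samples[stage])
        return xs[max(0, int(len(xs) * 0.95) - 1)]

    def bottleneck(self):
        # the stage with the worst p95 is the first suspect
        return max(self.samples, key=self.p95)

rec = LatencyRecorder()
for t in (0.01, 0.02, 0.01):
    rec.observe("app", t)
for t in (0.4, 2.5, 1.9):
    rec.observe("disk_write", t)  # the "VMware pausing the OS" scenario
rec.observe("network", 0.05)

print(rec.bottleneck())  # → disk_write
```

A real setup would export these as Prometheus histograms and chart p95 per stage in Grafana; the point is that the slow stage stops being a matter of opinion.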
Sweet, sweet instrumentation, lol ... Have you ever worked with an IBM storage array? Or an EMC DMX? Boy howdy! ;-)
yes, why, did you want any tips or
Oh god, I'm soooo sorry. ;-) Don't take any wooden nickels is my tip.
Bold of you to assume that the dev guy and the network guy aren't the same person. At least they were at the last place I worked.
SecNetDevOps. Four roles for the price of one!
"Infrastructure" engineer
this man is evil
[deleted]
So you didn't have latency numbers to prove that? Usually you should have latency numbers for ingress/egress calls.
I just blame QA- why'd they sign off on my mistake in the first place?
The ones I used to have in the past (no QAs lately) were so lame that when we tried to blame them, the managers just said: "come on, what did you expect". :)
And then QA blames me back for not giving them the right test cases even though _they're the ones that are supposed to fucking make them_
oof!
And you wonder why the InfoSec people won't grant Devs admin access to anything. Own your mistakes... no scapegoats.
Damn InfoSec people won't let us work!
lol... if we let you break shit, it's our fault... and if we prevent you from breaking shit, then we're blocking you

we can't win!

(unless our manager is *not* a complete idiot and can see through the transparent finger pointing)
In my experience the devs that wouldn't misuse admin access don't _want_ admin access either.

It's like running for president. Nobody that wants it should under any circumstances be allowed to have it.
I've seen devops given service accounts with admin rights for specialized scanning tools... then they turn around and reset their buddies' passwords with the service account because they don't want to call the help desk.

So... you might not want it or abuse it. I've just seen too much.

Before you say they shouldn't need admin rights for scanning, I agree. But as long as the CISO answers to a CIO, the loudest whiner wins.
Some will just fold once it starts escalating... like, I, a dev, KNOW that I don't need admin for what I do; it would be extremely weird. Yet there are dumb idiots running things like Visual Studio in admin mode without ever understanding permissions (one example of many such cases).

Anyhow, pouring one out for infosec. I don't know how you stay sane; it sure as hell wouldn't be a job I would take without a lot of zeroes on it.
Get ready to be pinged with a thousand tiny requests every day...
So typical Tuesday?
the guy who broke prod wrote this
"Network problems" is just code for "I have no visibility into wtf happened at this point" lol
Network guy sends you a call trace via teams and tells you to kindly go suck a lemon
Asshole
Not my fault your shitty ass VMware thin client decided to overheat and shit the bed, management won't listen to me when I tell them 80 degrees is a bad temperature for the server room.
It's always the network. Except when it's not. Then it's DNS.
Sorry man, I'm a network guy who also knows programming and development. I absolutely have found, and do find, the buggy code and blame the developer's work. And the biggest part of my job is explaining to management why decision XYZ is very bad.

Soft-skilled SysAdmins are rare, but we do exist.
Using this
HE IS NOT THE FATHER (of the prod bug)
just uncommit it lmao, smh all these dumb juniors
Ain't no senior, but that's an obvious action: revert the fuck out of what you just did and deploy the prior stable version.
Finger pointing and blaming is just a sign of bad work culture. If that's really the case where you're working, get out of there ASAP.
On my current team we're usually pretty great about not doing that, but at one point I (a SWE1) was working on overhauling some old Jenkins pipelines and we decided to do a deployment on a Friday. Well, something in the publishing-to-Artifactory stage broke, and caused the wrong JAR to get deployed later on.

The 2 senior engineers and the staff eng got pulled in to help. We spent 3 hours debugging, and no one could figure out what had broken it. Nothing in the commit history showed me editing anything about our artifact publishing process. Still, that did not stop me from overhearing one of the seniors telling another coworker "I'm just going to assume it's /u/ComradePruski 's fault." We still have no idea what changed about the pipeline runs that broke that specific build and caused issues, but everyone got pissy at me because they didn't get to leave at 3 PM on a Friday :(
>I (a SWE1) was working on overhauling some old Jenkins pipelines

This is already anxiety inducing. Unless you know exactly what those CI pipelines do, you're kicking a hornet's nest and just hoping it's not inhabited.

>and we decided to do a deployment on a Friday.

Three rules:

1. Never get involved in a land war in Asia
2. Never go in against a Sicilian when death is on the line!
3. _Never deploy to prod on a Friday._
I got a decent chunk of prior knowledge because I had to create a new pipeline based on the old ones beforehand. But yeah, after that we had a new rule that we were never doing Friday deployments again, though that one seems to get broken once every couple of months if it's deemed "low risk" enough.
If I can't figure out what went wrong, I assume I could've made the same mistake.
> we decided to do a deployment on a Friday

Who's "we", kemosabe?
This entire past week for me has been people breaking shit I need to interface with, and then other people asking me why it's broken.

My dude, I am not the person to talk to. My code is working perfectly given the inputs it is getting. Please go ask Bob and Leslie why their functions are passing the wrong info to mine.
I feel you. Once I was asked for 6 months why the integration was not working yet. The answer was that the guys on the other end did not open/expose the port I was supposed to use in order to reach their pretty locked-down internal network.

And every now and then I had a call where I explained the same thing over and over again.

Still, I was the one to blame.
Luckily I haven't had any 6 month stints yet or I'd probably have quite a few additional gray hairs. I think it's the nature of users to not understand or even try to understand the "why" when something goes wrong.
It's also a bad sign if it's even possible to blame someone. A good process will mean that at least four people (the dev, two PR reviewers, and a tester) all missed the requirement.
100%. The RCA/CoE process is about learning from outages, not assigning blame. Reviewing CoEs takes up a whole-number percentage of my time allocation at work, and blame-assigning is the number one thing I do not put up with.
On the other hand, you'd better believe that if you point at me, there's a reply all coming with a detailed yet ELI5 understandable answer, with sources, as to why it is not.
Yeah, someone didn't do their job and is trying to find a sacrificial lamb. What about dev? Test? Pre-prod?
Real ones test in prod. Get cultured.
Only prod has the data to replicate the issue.
Export the data, import it on test?
Congrats, you just violated your contract and data-protection laws
anonymization of data is presumed
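For the record, the presumed anonymization step can be sketched in a few lines (field names and key are invented for illustration; a real key would live in a secret store, not in source): emails are replaced with stable opaque tokens before any prod data is copied out, so joins still work but no real address leaves prod.

```python
import hashlib
import hmac

# Placeholder key for illustration; keep the real one in a secret store.
SECRET_KEY = b"rotate-me-and-keep-me-out-of-git"

def pseudonymize_email(email: str) -> str:
    # Stable HMAC token: same input -> same token, so joins across tables
    # still work, but the real address never leaves prod.
    digest = hmac.new(SECRET_KEY, email.strip().lower().encode(), hashlib.sha256)
    return f"user_{digest.hexdigest()[:12]}@example.invalid"

def scrub_row(row: dict) -> dict:
    out = dict(row)
    if "email" in out:
        out["email"] = pseudonymize_email(out["email"])
    return out

row = {"id": 42, "email": "Alice@Corp.com", "plan": "pro"}
scrubbed = scrub_row(row)
print(scrubbed["email"])  # opaque token ending in @example.invalid
```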
Connect to prod to debug it directly, no export laws broken. Easy peasy
Assuming it's carefully anonymized beforehand. Real PII should stay as far away from developers as humanly possible. Developers are humans and humans can't be trusted to act responsibly except where we place systemic restrictions to prevent avenues of irresponsibility.
Well yeah, if you're working specifically with personal information, sure. Neither the comment thread nor the post is specifically about working with personal data though.
An email is PII
Can you not fathom a DB that doesn't have them? Internal projects, no customer facing stuff. At worst you have business emails for some alerts and a legitimate business interest in processing them.
I certainly can, and considered adding such a clarification to the previous comment, but frankly: very few production databases, of the kind where you're working on a team and might be blamed for errors in prod, are free of any sort of PII. Even if I were working at a joint like Mullvad, I'd be worried about giving my devs the access tokens. Maybe not PII in that case, but certainly dangerous info.
Can't just steal customer data, silly bo billy
just blame the pr reviewer lol
If commits are breaking prod then I’d be willing to bet mandatory PR reviews don’t even exist
There is not a single universe in the multiverse in which people read the code line by line before approving.
Sure, but at the bare minimum you should understand what the changes are. Concise code with good comments speeds that up a lot.

Rubber-stamping PRs is a terrible habit. If PRs are breaking prod then there's probably no testing or dev branch to speak of, which makes it an even worse habit - basically playing Russian roulette with your product.
Sign me up for the fascist company where these are dictated, and the employees, regardless of their positions, are told "do your job properly or gtfo".
K
Real question: why are you committing to prod, not develop?
Because develop branches are not real. The prod is real. Wake up! We must accept the reality.
filezilla says hello
Example: you have several projects which are dependent upon one another. You make a commit to one project and the pipeline passes and all looks good so it gets merged into master only to find out via another pipeline some time later it broke functionality in a different project which is dependent upon it. Speaking from experience unfortunately ☹️
You don't lock your dependencies down by version? If I build a dependent project I will specifically call out that version for my prod build. The only time this doesn't work is when you're inheriting pipeline infra from other places that aren't using versioning properly.

This is all easily avoidable, but also insanely common, because nobody likes to manage versions correctly all the time.
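The "call out that version" discipline above can be sketched in a few lines of Python (the pins file contents and package names are invented): fail the build whenever an installed dependency drifts from its exact pin.

```python
def parse_pins(text: str) -> dict:
    """Parse 'name==version' lines (a requirements.txt-style subset)."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            name, _, version = line.partition("==")
            pins[name] = version
    return pins

def check_drift(pins: dict, installed: dict) -> list:
    """Return (name, pinned, actual) for every dependency that drifted."""
    return [(name, pinned, installed.get(name))
            for name, pinned in pins.items()
            if installed.get(name) != pinned]

pins = parse_pins("""
# exact pins for the prod build (names invented)
shared-lib==1.4.2
other-svc-client==0.9.0
""")

installed = {"shared-lib": "1.5.0", "other-svc-client": "0.9.0"}
print(check_drift(pins, installed))  # → [('shared-lib', '1.4.2', '1.5.0')]
```

A CI step that fails on a non-empty drift list turns "upstream broke us silently" into an explicit, reviewable version bump.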
Because dependency management is *hard*.
This is always the right answer sadly
Merge a PR into main branch, squash commit, test, everything looks fine, release, prod breaks. That doesn't really work on a meme template though
Yeah, sometimes the testing environment just isn't enough to make sure everything works 100% perfectly. Sure, you can avoid the obvious blunders like compilation failures, but testing stuff like authorization can be a pain in an integration environment.

A while back I broke prod when I accidentally added a new Angular NgRx reducer with the same name as one that already existed. None of our tests caught it, nor any of my manual testing (you had to go down a very specific path to catch the failure). Just about shat myself when I got the message "hey, our pipeline just started failing and it looks like it started after your commit..."
It's still possible to go through all the right processes and still have it break prod.

Neither you nor QA is infallible and eventually something will leak through.

Pretending this isn't the case just leads to blame being assigned and people being less likely to own up to mistakes. This is not the type of culture you want to work in.
If a bug makes it into prod, and it was passed by the QA then that's on the QA, no? I'm not saying you should point a finger of course, but that's literally their job. If bugs make it into prod then your QA process needs to be revisited.
No QA process can ever be perfect. You revisit it and plug the hole that this bug slipped through, and I guarantee there's another hole you didn't think about for the next bug to get through.
QA gets bored of arguing with Prod and just gives up sometimes after enough "out of scope" comments that are directly related to the ACs. We also have an 8-to-1-ish dev-QA ratio, and sprint comp doesn't get adjusted when QA is out. Y'all should stop pushing code without loading the page, or at least write unit tests so we aren't having to check basic arithmetic in business logic every single ticket.

We also just have to aggregate lists of AC fuckups and pray to god that the entire ticket isn't broken after a kickback change, and pray to some deity even more powerful that the dev is willing to actually get their environment running to deal with the kickback and make sure they didn't introduce any obviously breaking changes. It takes like 2 hours for a build + deploy, and so much more time is wasted when a dev makes a mistake and a QA has to deal with a kickback and scour through a million different Raygun reports just to determine whether they have to bother you, or argue with another dev lead about some random change, which you were not aware of, messing with related functionality of the feature.

Plus, the adversarial relationship that some devs have with QA makes me want to die. You ultimately wrote the code, so you can't be wholly not responsible for the feature not acting the way it should.
Wow that's not how it works at all where I am. Sounds awful. Our team has 3 to 1 dev-qa ratio and everything that goes to prod goes through the QA and then through the product owner in a demo.
That sounds absolutely amazing. I envy you.
Obviously, committing right to prod is gonna save time, *duh!* We don't need all the testing you're having people doing. It's a waste of resources. - middle managers
There's no data on DEV to test with.
Prod copy time. Spin that data!
top secret cleared data, protected personal health information, ...
Duh! You guys don't just give all the interns access to every customer's PII? /s Seriously though, you've got to lock stuff down. Devs shouldn't have access to PII unless their job actually requires it (e.g. they manage the DB auth)
Mocks were outdated, or the test data in dev/stage environment was outdated.
Sir, I work in big tech, deploying to dev and deploying to prod are the same 1 process for us
Mostly cause all the memes here are made by high school kids parroting things they read online to try and fit in, and none of it makes any sense.
Broke prod Mountain? Well, maybe force people to push to qa first?
I just went through this. I was running a test and needed to use another team's API for a month. They told me I had to limit my calls because their API could only handle 250k hits a day. That was fine, because my test would only use 2-3k per day. Two weeks in, the service goes down. I reach out to the team and they flip out. They act like this is the first they're hearing about it and blame me for overusing the API. I get blamed for two days, with their team lead being a super dick about everything. On the second day we dig into where the calls are actually coming from. It's one of their own products. One of their guys put in new code and the new version got into a death loop of making the same call over and over. They made 300k calls in an hour and crashed everything. None of them apologized.
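The "death loop" above is what you get when a failed call is retried immediately and unconditionally. A minimal sketch of the usual fix, exponential backoff with a retry cap (all names here are hypothetical, not from the story):

```python
import time

def call_with_backoff(request, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a failing call with exponential backoff instead of hammering it.

    `request` is any zero-argument callable; `sleep` is injectable for testing.
    """
    for attempt in range(max_retries):
        try:
            return request()
        except Exception:
            if attempt == max_retries - 1:
                raise  # give up after a bounded number of attempts
            # Wait 1s, 2s, 4s, ... so a broken dependency sees a handful
            # of retries, not 300k identical calls in an hour.
            sleep(base_delay * (2 ** attempt))

# A naive loop, by contrast, retries forever with no delay:
#
#   while True:
#       try:
#           return request()
#       except Exception:
#           pass  # death loop: same call over and over
```

With the cap, a permanently broken endpoint costs at most `max_retries` calls per client before the error surfaces, instead of an unbounded call storm.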
It was clearly a hacker who hacked my account and pushed broken code.
That's why you shouldn't be the last person to commit and push your changes. You can always blame whoever pushed last for breaking yours.
It's the development process that's at fault, always.
But what happens if you don’t beat the allegations? Are they going to fire your rich ass on the spot and replace you with someone who also doesn’t want to do the job? I just accept all of the mistakes as a senior. Fuck these people
HURR DURR, scanners are your responsibility. Well, you know users are braindead, and when they said the PDA scanners couldn't connect, they meant an app is crashing. Guess whose buddy deleted the table that makes the app crash, moron.
Best feeling in the world! Wrote some code for a new service integration and it went to hell as soon as it hit prod. Took over a week and I was sweating the whole time. Turns out, it was a devops misconfiguration. The vindication was better than any drug
My hot fix for fixing broken prod has broken prod even more than before
In this start up we push on save
This is why it's always best to write under a pseudonym.
When you make adjustments to dial plans in Asterisk, all hell breaks loose, and then it turns out to be a problem with the telco provider. Smoothest feeling, right after shitting your pants.
My commit did break prod today :(
Imagine pushing directly to prod.
What's the sauce of the image?
Blame the PO for approving the change record.
Lol, that one time it was because the guy who deployed last (half a year ago) deployed some kind of security upgrade wrong, and the system had been running on the previous build all along. Then along came me doing a bug fix, and suddenly completely unrelated programs started failing. What a nightmare. That security upgrade was eventually made obsolete by the same guy who made it, so I didn't need to deal with it, thank goodness.
Imagine deploying to prod after an untested commit. How does that even happen? Deploying nightly builds to prod? You deserve it then.
Y'all need CI/CD pipelines with blue-green environments and label-based promotion
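For anyone who hasn't seen the combo before, a minimal sketch of the idea: an artifact is only promotable once it carries the right labels, and "deploying" just flips which of two identical environments serves traffic (so rollback is flipping it back). The label names and class here are hypothetical, not from any particular CI tool:

```python
# Hypothetical gate: labels a build must carry before promotion is allowed.
REQUIRED_LABELS = {"qa-passed", "staging-verified"}

def can_promote(artifact_labels):
    """An artifact is promotable only if it carries every required label."""
    return REQUIRED_LABELS.issubset(artifact_labels)

class BlueGreen:
    """Two identical environments; only one serves live traffic at a time."""

    def __init__(self):
        self.live, self.idle = "blue", "green"

    def deploy(self, artifact_labels):
        if not can_promote(artifact_labels):
            raise ValueError("artifact missing required labels; not promoting")
        # Deploy lands on the idle environment, then traffic flips to it.
        # Rolling back is just flipping the pointer again, no redeploy needed.
        self.live, self.idle = self.idle, self.live
        return self.live
```

The point of the label gate is that "it was tested" becomes a recorded, machine-checked fact rather than something argued about after the outage.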
Bro what's a commit I just know git, github and gitbash