Not that I seek it out, but whenever I come across a Kent Beck post, I feel exactly the same: authoritatively pointing out the obvious with metaphors while offering zero practical advice.
I don't understand how this sentiment is weird at all and I think you're reading a lot of your own biases into it. The statement talks about where the work gets done and doesn't speak to how efficiently the work gets done.
There's nothing inherent to remote work that prevents it from being efficient enough to further the goals of an organization. It also doesn't prevent one from being a leader and teaching younger folk. In fact, I've had to do this a lot, and it's much nicer to get people on a video call and share my screen to teach or do team programming than it is to find a meeting/conference room in the office.
Some people don't like interruptions _precisely_ because it prevents them from doing an efficient job. And it's more than slightly annoying when you're getting interrupted multiple times per hour.
Yes, _very_ few things are built solely by an individual, but _nothing_ prevents remote work from "Doing work" as you put it or collaboration between teams.
I worked for ~2 years as a contractor for a government entity in Canada with a headcount of ~3,500 employees. What I observed there was sickening. This was a place that had white-collar unions for all non-management employees. This union had completely hijacked the mission of the institution. It was no longer about serving the people that this entity was created to serve, but rather to protect the union and its contributors.
The software we were in charge of writing had a direct, material impact on the physical and mental well-being of people in the province. Life and death. And at times I saw things like a deployment of features being delayed by weeks or months because the union member responsible for _manually_ deploying the changes was on vacation. Automating that deployment meant automating a union employee's job, which was impossible. These features directly served the needs of people who were in critical need of them.
On the other hand, I have family friends who work for UPS and other delivery services and see the brutal toll it takes on their body and mind. Pushed to absolute limits and exploited because they don't have a union.
But to me, it seems unions can and often do exploit people. After witnessing all of this I've developed a very dim view of humanity. We all just want to exploit someone.
It's hard to even have a reasonable discussion about unions with most people, because (it seems like) the vast majority of people either fall into
1. Unions are great and everyone should be part of one. Anyone who points out the bad parts of unions is an evil conservative and hates the middle/lower class.
2. Unions are evil and cause nothing but problems. Anyone who supports unions is a fool who doesn't see the horrors they cause.
Any discussion that includes the viewpoint that unions have both good points and bad points (as you allude to above) has a high likelihood of being attacked by _both_ sides.
It also doesn't help that it appears to be very hard to actually set things up so that you get the good sides of a union without also getting the bad sides.
Some European countries, the Netherlands, Germany and France for sure, have not two but three pillars of the employee-company relationship: unions, works councils and individual contracts.
The works council (aka 'company codetermination') involves employee-elected colleagues in company leadership decisions like hiring, reorgs, raise distribution, working hours, and shift-planning guidelines,
... while the unions focus on salary and compensation negotiations.
So long as the SEIU is around, I'll probably be in the second bucket. I get the theory of why unions can be beneficial, but the reality of the way they are structured in the US is fundamentally flawed.
Frankly, anyone who raids PCA funds the way they do is evil, plain and simple.
This mirrors my experience with unions pretty closely. I would much prefer that the sort of workers rights unions provide were instead at a government policy level.
That being said, there will always be some need for collective bargaining. I only wish it were possible to limit these extreme union grabs for power.
In the United States, our ports have the same problem. They are run almost entirely by unions that do not want any kind of automation whatsoever. Ports in other major countries have automated a lot of the moving of cargo, but unions here have fought it and prevented it.
> It was no longer about serving the people that this entity was created to serve, but rather to protect the union and its contributors.
This is exactly why Milton Friedman was against unions. Unions would eventually evolve into this amalgamation that only protects the union and its contributors while disallowing other qualified workers from getting jobs at the respective companies.
This is not how unions work in most of Europe. Except perhaps France where the unions have grown a bit much and strikes are too common. In the other countries I've seen they established a healthy and stable counterbalance. So I would argue it's not inevitable.
This is why we need more PPP (private-public-partnerships) here.
Just look at the MTO. It used to be an absolute cesspool of inefficiency that would make the old USSR envious.
Drivers license expired? Time to go stand in line where there are two tellers and 6 people "on break" in the back. The lineup is out the door, but who cares? Certainly not the union slugs who work there.
They finally introduced kiosks, which charged a "convenience fee" (probably at the request of the unions).
Fast forward, there is now a PPP in place for the MTO. Automation all over the place, you can renew your license online (FINALLY).
If not for the ability to step away from their union staff, none of this would have taken place. The union fights changes that lead to efficiency, as these often mean job losses.
Any "tech" the government builds looks like it was done by high school students and it is a shame they pay for this garbage.
Again, imagine if you went to your bank's site and it looked like it was built 10 years ago and had never been updated.
This is absolutely not how unions work in (most of) Europe. They don't get involved in day to day business. They just negotiate working conditions, pay scales etc.
I've worked at a company where Agile/Scrum was implemented beautifully and we never thought to change the process.
However, I've worked more often at companies where Agile was either misunderstood, or inserted into an already flawed/toxic process of development as a magic bullet. I think there are enough places out there doing the latter that warrants discussion of this post and how we can improve things.
Totally fair, and agreed on the value of such a discussion; Same deal - seen/lived both. It requires fundamental cultural change (PM plays a big part). When the discussion starts with teams engaging in poorly defined scrum, I usually reference: https://www.scrum.org/ScrumBut.
I think there are enough employers out there implementing the version of Agile/Scrum described in the OP's post. I currently work at such a company and can relate to pretty much everything in that post.
The sad part is I'm not really sure what better alternative would be accepted by clients who have now become comfortable with Agile. From my personal experience, if you're on a team with smart, motivated people, Agile won't really help you or increase your output. You will continue to deliver quality work regardless of story points, scrums, and all the other metrics.
If you're not on such a team, clients using Agile can now point to some number and say "Hey, this number is 100 but you should be at 180. You are not performing". I do agree that it gives too many false positives and generally my attitude towards them (in cases where I have been the target) is to simply reply "If you don't think I'm performing: fire me."
They usually walk away at that point mumbling something and I go back to work.
What if I want to test some part of the function in isolation?
At my current job I have to maintain a huge and old ASP.NET project that is full of these "god-functions". They're written in the style that Carmack describes, and I have methods that span more than 1k lines of code. Instead of breaking the function down into many smaller functions, they chose this inline approach, and now we are at the point where we have battle-tested logic scattered across all of these huge functions, but we need to use bits and pieces of it in the development of the new product. Now I have to spend days and possibly weeks refactoring dozens of functions, breaking them apart into manageable services so we can not only use them, but also extend and test them.
I'm afraid what Carmack was talking about was meant to be taken with a grain of salt and not applied as a general rule, but people will anyway after reading it.
Perhaps it suggests our way of testing needs to change? A while back I wrote a post describing some experiences using white-box rather than black-box testing: http://web.archive.org/web/20140404001537/http://akkartik.na... [1]. Rather than call a function with some inputs and check the output, I'd call a function and check the log it emitted. The advantage I discovered was that it let me write fine-grained unit tests without having to make lots of different fine-grained function calls in my tests (they could all call the same top-level function), making the code easier to radically refactor. No need to change a bunch of tests every time I modify a function's signature.
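A tiny sketch of the idea (all names invented for illustration): the code under test appends domain-level events to a trace, and the test calls only the top-level function and asserts on the trace, not on intermediate return values.

```javascript
// Hypothetical "trace test": the code under test logs domain-level
// events, and the test checks the trace rather than internals.
const trace = [];
function log(event) { trace.push(event); }

// Top-level function; its internal decomposition can change freely
// as long as the domain-level events it logs stay the same.
function processOrder(order) {
  log(`validated ${order.id}`);
  const total = order.items.reduce((sum, item) => sum + item.price, 0);
  log(`charged ${total}`);
  return total;
}

// The test exercises only the top-level entry point.
trace.length = 0;
const total = processOrder({ id: 42, items: [{ price: 3 }, { price: 4 }] });
// trace is now ["validated 42", "charged 7"] and total is 7
```

Refactoring `processOrder` into two or ten helpers doesn't touch this test, which is the point: the brittleness moves from function boundaries to the logged events.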
This approach of raising the bar for introducing functions might do well with my "trace tests". I'm going to try it.
[1] Sorry, I've temporarily turned off my site while we wait for clarity on Shellshock.
Something to consider, and this is only coming off the top of my head, is introducing test points that hook into a singleton.
You're getting more coupling to a codebase-wide object then, which goes against some principles, but it allows testing by doing things like
function awesomeStuff(almostAwesome) {
  function f1(somethingAlmostAwesome) {
    TestSingleton.emit(somethingAlmostAwesome);
    var thing = makeMoreAwesome(somethingAlmostAwesome);
    // makeMoreAwesome is actually 13 lines of code,
    // not a single function
    TestSingleton.emit(thing);
    return thing;
  }
  function f2(almostAwesomeThing) {
    TestSingleton.emit(almostAwesomeThing);
    var at = makeAwesome(almostAwesomeThing);
    // this is another 8 lines of code.
    // It takes 21 lines of code to make something
    // almostAwesome into something Awesome,
    // and another 4 lines to test it,
    // then some tests in a testing framework
    // verify that the emissions are what we expect.
    TestSingleton.emit(at);
    return at;
  }
  return f2(f1(almostAwesome));
}
In production, you could drop TestSingleton. In dev, have it test everything as a unit test. In QA, have it log everything. Everything outside of TestSingleton could be mocked and stubbed in the same way, providing control over the boundaries of the unit in the same way we're using now.
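One way to get that per-environment behavior is a pluggable sink on the singleton; a minimal sketch, with all names and the doubling logic invented for illustration:

```javascript
// Sketch of a pluggable emission sink (names hypothetical): production
// leaves the no-op default, dev installs a recorder for assertions,
// QA could install a logger instead.
const TestSingleton = {
  sink: function () {},                       // production default: no-op
  emit: function (value) { this.sink(value); },
};

// Dev/test configuration: record every emission for later assertions.
const recorded = [];
TestSingleton.sink = function (v) { recorded.push(v); };

function makeAwesome(x) { return x * 2; }     // stand-in for the real logic

function awesomeStuff(almostAwesome) {
  TestSingleton.emit(almostAwesome);
  const result = makeAwesome(almostAwesome);
  TestSingleton.emit(result);
  return result;
}

awesomeStuff(21); // recorded is now [21, 42]
```

A test framework would then assert on `recorded`, while production never pays more than an empty function call per emission.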
I've had to change an implementation that was tested with the moral equivalent to log statements, and it was pretty miserable. The tests were strongly tied to implementation details. When I preserved the real semantics of the function as far as the outside system cared, the tests broke and it was hard to understand why. Obviously when you break a test you really need to be sure that the test was kind of wrong and this was pretty burdensome.
I tried to address that in the post, but it's easy to miss and not very clear:
"..trace tests should verify domain-specific knowledge rather than implementation details.."
--
More generally, I would argue that there's always a tension in designing tests, you have to make them brittle to something. When we write lots of unit tests they're brittle to the precise function boundaries we happen to decompose the program into. As a result we tend to not move the boundaries around too much once our programs are written, rationalizing that they're not implementation details. My goal was explicitly to make it easy to reorganize the code, because in my experience no large codebase has ever gotten the boundaries right on the first try.
I've dealt with similar situations, and it's what led me to favor many small functions myself. I like this article because, by going into the details that convinced him, John Carmack explains when to take his advice, not just what advice to take.
I think maybe the answer is that you want to do the development all piecemeal, so you can test each individual bit in isolation, and /then/ inline everything...
I'm not sure. If you then go ahead and inline the code, your unit tests will be worthless.
I mean it could work if you are writing a product that will be delivered and never need to be modified significantly again (how often does that happen?).
Then one of us has to go and undo the in-lining and reproduce the work :)
I think I'm going to say that, if it's appropriately and rigorously tested during development... testing the god-functionality of it should be OK.
Current experience indicates however that such end-product testing gives you no real advantage to finding out where the problem is occurring, since yeah, you can only test the whole thing at once.
But the sort-of shape in my head is that the god-function is only hard to test (after development) if it is insufficiently functional; aka, if there's too much state manipulation inside of it.
Edit: Ah, hmm, I think my statements are still useful, but yeah, they really don't help with the problem of TDD / subsequent development.
> Current experience indicates however that such end-product testing gives you no real advantage to finding out where the problem is occurring, since yeah, you can only test the whole thing at once.
I’m not so sure. I’ve worked on some projects with that kind of test strategy and been impressed by how well it can work in practice.
This is partly because introducing a bug rarely breaks only one test. Usually, it breaks a set of related tests all at once, and often you can quickly identify the common factor.
The results don’t conveniently point you to the exact function that is broken, which is a disadvantage over lower level unit tests. However, I found that was not as significant a problem in reality as it might appear to be, for two reasons.
Firstly, the next thing you’re going to do is probably to use source control to check what changed recently in the area you’ve identified. Surprisingly often that immediately reveals the exact code that introduced a regression.
Secondly, but not unrelated in practice, high level functional testing doesn’t require you to adapt your coding style to accommodate testing as much as low level unit testing does. When your code is organised around doing its job and you aren’t forced to keep everything very loosely coupled just to support testing, it can be easier to walk through it (possibly in a debugger running a test that you know fails) to explore the problem.
If it's done strictly in the style that I've shown above then refactoring the blocks into separate functions should be a matter of "cut, paste, add function boilerplate". The only tricky part is reconstructing the function parameters. That's one of the reasons I like this style. The inline blocks often do get factored out later. So, setting them up to be easy to extract is a guilt-free way of putting off extracting them until it really is clearly necessary.
But, it sounds like what you are dealing with is not inline blocks of separable functionality. Sounds like a bunch of good-old, giant, messy functions.
I think the claim is that if you don't start out writing the functions you don't start out writing the tests, and so your tests are doomed to fall behind right from the outset.
I'm not fanatical about TDD, but in my experience the trajectory of a design changes hugely based on whether or not it had tests from the start.
(I loved your comment above. Just adding some food for my own thought.)
"I'm not fanatical about TDD, but in my experience the trajectory of a design changes hugely based on whether or not it had tests from the start."
I'm still not sold on the benefits of fine grained unit tests as compared to having more, and better, functional tests.
If the OPs 1k+ methods had a few hundred functional tests then it should be a fairly simple matter to re-factor.
In "the old days" when I wrote code from a functional spec the spec had a list of functional tests. It was usually pretty straightforward to take that list and automate it.
Yeah, that's fair. The benefits of unit tests for me were always that they forced me to decompose the problem into testable/side-effect-free functions. But this thread is about questioning the value of that in the first place.
Just so long as the outer function is testable and side-effect-free.
Say you have a system with components A and B. Functional tests let you have confidence that A works fine with B. The day you need to ensure A works with C, this confidence flies out of the window, because it's perfectly possible that functional tests pass solely because of a bug in B. It's not such a big issue if the surface of A and C is small, but writing comprehensive functional tests for a large, complex system can be daunting.
The intro to the post has Carmack saying he now prefers to write code in a more functional style. That's exactly the side-effect-free paradigm you're looking for.
Even most of the older post is focused on side-effecting functions. His main concern with the previous approach is that functions relied on outside-the-function context (global or quasi-global state is extremely common in game engines), and a huge source of bugs was that they would be called in a slightly different context than they expected. When functions depend so brittly on reading or even mutating outside state, I can see the advantage to the inline approach, where it's very carefully tracked what is done in which sequence, and what data it reads/changes, instead of allowing any of that to happen in a more "hidden" way in nests of function-call chains. If a function is pure, on the other hand, this kind of thing isn't a concern.
> They're written in the style that Carmack describes, and I have methods that span more than 1k lines of code.
I don't think that's the kind of "inlining" being discussed -- to me that's the sign of a program that was transferred from BASIC or COBOL into a more modern language without any refactoring or even a grasp of its operation.
I think the similarity between inlining for speed, and inlining to avoid thinking very hard, is more a qualitative than a quantitative distinction.
Sure, before I knew how to write maintainable code. Before I cared to understand my own code months later.
My first best-seller was Apple Writer (1979) (http://en.wikipedia.org/wiki/Apple_Writer), written in assembly language. Even then I tried to create structure and namespaces where none existed, with modest success.
Maybe you should just be testing the 1k-line functions, if even that, and not the individual steps they take. The usefulness of a test decreases as the part being tested gets smaller, because errors propagate. An error in add() is going to affect the overall results, so testing add() is redundant with testing the overall results and you are just doing busywork writing tests for it.
I also want to echo this sentiment. It's sad how many people here have responded with disgust and have wished bad things upon you. I come to HN to read discussions of a higher level, and instead of proposing level-headed solutions as to how to improve the app-store, you are treated with insults, and that's really sad.
As a developer you are already at the total mercy of these app stores. That apparently isn't a problem for many people here, because of the belief that pride in the work should be completely secondary to monetary gain.
The owners of these app stores don't seem to care much either. The more people who make apps regardless of revenue expectations, the more they improve the quality of these app stores and add value to the owners, not the developers. The Apples, and Googles don't have any incentive to make the marketplace any more fair it would seem.
I can't believe someone who has 'delivered highly scalable solutions' actually managed to write this line on his blog with a straight face. How were you not able to deduce that the devs likely detected disabled javascript without the use of javascript?
His whole post is devoid of content and nothing but statements without any real substance or evidence.
The point of my article was not how the dev was detecting JS, so I didn't want to go into detail on it. Even if he did that (which he never claims he does), he would only see that people are coming in with JS turned off, not that they came from Facebook.
He does make the claim that 80% of the clicks they were paying for had JS disabled. That would imply the referrers on those requests were set to Facebook while the IPs hitting the pages weren't registering in his JS-based analytics package. We know he's logging the hits to a file, so presumably that data is there.
You claimed 'There were a few false assumptions made in the post. The first is that the traffic coming in had JavaScript disabled.' Care to elaborate on how it's a false assumption if the implied statement above is true?
I'll give you that he may be wrong, but I really don't see where there's concrete evidence to support your claim he's making false assumptions!
Or. Hits to a website result in a list of unique IPs accessing the site. Remove any IPs from the list which are known to be running JavaScript. The remaining IPs aren't running JavaScript or are ignoring/breaking your script.
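The set arithmetic described above is straightforward; a sketch, with sample IPs invented for illustration:

```javascript
// Set difference sketch (sample data invented).
// allHits: unique IPs seen in the raw server log.
// jsHits: IPs that triggered the JS-based analytics beacon.
const allHits = new Set(['1.1.1.1', '2.2.2.2', '3.3.3.3']);
const jsHits = new Set(['2.2.2.2']);

// Whatever is left had JS disabled, or blocked/broke the script.
const noJsEvidence = [...allHits].filter(ip => !jsHits.has(ip));
// noJsEvidence is ['1.1.1.1', '3.3.3.3']
```

In practice you'd build both sets by parsing the access log and the analytics export, but the comparison itself is just this filter.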
This raises the question of whether there was a scenario where there was a hit to the site, the referrer was set to Facebook, and JS was enabled in the browser, but the hit didn't result in a JS-enabled request to the analytics software.
Another poster mentioned prefetching as a likely culprit. Google is known to prefetch search results for you, but it requires the use of the rel="prerender" tag. I find it highly unlikely that is what's happening here.