I searched for myself on GitHub … here’s the link Continue reading “We’re a Little Too Open Source — Public Service Announcement”
I’m going to keep this one short… mostly because it’s almost 2am and I’d tweet it, but this is longer than 140 characters. Continue reading “Why I Love Docker”
Above is a video from WebPageTest … Notice at the 2.0s mark … they are completely different. Nothing changed. There is no API call to add in network effects, nothing but an ever-so-slight difference in the render order. The left becomes visually complete at 2.0s, the right at 2.3s.
If they were playing darts, they just got better at hitting the wall.
I have a 15% difference … but really, who cares? 0.3s isn’t perceptibly different or worse. The question then becomes: what is the margin of error? Should it be based on human perception, or statistically on the test and time of day?
WebPageTest helps you figure this out, sort of. It gives you a range over a series of tests, but you still have to do the math AND know what math to do. Most people look at a histogram of data and still don’t know what kind of impact the 95th percentile has on their performance.
So, let’s do some math…
So, if I’m looking at WebPageTest, I see that over 9 tests I ended up with a visually complete time between 1.7s and 2.1s … according to WebPageTest, I have a standard deviation of 0.2s and a mean of 1.9s. This means the 2.0s test above was within one standard deviation, and the second test was less than or equal to two standard deviations above the mean. In my humble opinion, two standard deviations is a pretty simple measure of “who gives a damn.”
This means that for anything from 1.5s to 2.3s, I really can’t say it’s anything special. If I were measuring a change, I’d want to verify that the change’s mean is at least two standard deviations away from the old mean as well.
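The two-standard-deviation check above can be sketched in a few lines. This is a minimal illustration with hypothetical run times picked to land on the same mean (1.9s) and standard deviation (0.2s) as the tests described; WebPageTest reports these numbers for you, this just shows the math.

```javascript
// Mean of an array of numbers.
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Population standard deviation.
function stdDev(xs) {
  const m = mean(xs);
  return Math.sqrt(xs.reduce((sum, x) => sum + (x - m) ** 2, 0) / xs.length);
}

// "Who gives a damn" window: anything inside mean ± 2 standard deviations is noise.
function noiseWindow(timings) {
  const m = mean(timings);
  const sd = stdDev(timings);
  return { low: m - 2 * sd, high: m + 2 * sd };
}

// Hypothetical visually-complete times (seconds) over 9 runs.
const runs = [1.6, 1.7, 1.7, 1.8, 1.9, 2.0, 2.1, 2.1, 2.2];
const { low, high } = noiseWindow(runs);
console.log(low.toFixed(1), high.toFixed(1)); // 1.5 2.3
```

Any single run that lands between `low` and `high` tells you nothing; only a shift in the mean beyond that window is worth celebrating.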
Here’s an example of a popular website:
Normal load: 7.4s to 11.4s (range of 4.0s)
Improvement over time: 6.3s to 10.7s (range of 4.4s)
Did it get better? Not in my opinion. If they were playing darts, they just got better at hitting the wall. That’s an overlap of ~60%, so only ~40% of page loads will see any improvement, and purely by chance. If an improvement only shows up by random chance, there’s no real improvement.
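One simple way to get a figure in that ballpark is to treat each measurement set as an interval and take intersection over union. This is an assumption about how the overlap was computed, not necessarily the method used in the post, but it lands near the quoted ~60%:

```javascript
// Overlap of two ranges as (intersection length / union length).
function rangeOverlap([aLow, aHigh], [bLow, bHigh]) {
  const intersection = Math.max(0, Math.min(aHigh, bHigh) - Math.max(aLow, bLow));
  const union = Math.max(aHigh, bHigh) - Math.min(aLow, bLow);
  return intersection / union;
}

const before = [7.4, 11.4]; // normal load, from the post
const after = [6.3, 10.7];  // "improvement" over time, from the post
console.log(rangeOverlap(before, after)); // roughly 0.65
```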
My blog isn’t particularly popular (or unpopular). I don’t really care too much about advertising its existence… that’s what Google is for. Continue reading “My Blog: Pure WordPress vs. Static Files With Hugo and WordPress Hybrid”
Table of Contents
- Developing with complex filesystem layouts
- Save yourself some bandwidth with a docker hub mirror
- Getting localhost to make sense
- Deploying swap like it’s an app
- Building super complex images
- Cleaning up images, volumes, and networks
I’m back. The last couple of months have been hectic from a personal standpoint. I walked away from my Rancher instance and dealt with what I had to deal with. Continue reading “Back Up And Running”
In case you are not aware, I write software for a company called BoomTown. It’s a not-so-startup in Charleston, SC. We offer a CRM for real-estate agents and manage their front-end websites. I’m on the front-end/consumer-facing side of the house, and if you were to put all of our domains under one domain, we’d be one of the most visited sites on the internet. If you’ve ever searched for a house in the USA or Canada, you’ve probably visited one of our sites before.
We handle a ton of traffic… we use Docker daily. In fact, Docker gets our new devs up and running in less than 30 minutes, which is saying a lot. We use WordPress, React, Backbone, C#, and Scala. It’s a ton of fun (I’m mostly massaging WordPress, React, and Backbone … but my love language is C#).
In order to package up all our assets for production, we use a tool called webpack. webpack does some really neat things under the hood, including a thing called “code splitting,” a.k.a. “layers,” “rollups,” or “fragments” if you’re coming from similar tools.
This allows us to defer downloading assets that are unlikely to be used immediately (or at all in the case of a desktop user who doesn’t need an off-canvas menu). However, a few weeks ago, we asked ourselves: “What is the cost to our users by doing this? When should we be doing it, and when should we not?”
I spent most of my day today answering those exact questions. To get started, I needed some facts: what does a split point cost, especially under bad network conditions? So I drummed up a little calculator. Given the original size (without splitting), the size of the same file after splitting, and the sizes of the split-out modules, what is the impact on site performance?
A real-life example: an original size of 1,126kb takes around 26s to download over a bad connection. That’s a long time… So what if we split out something that’s not needed right away? Say, an off-canvas menu. That brings our initial load down to 1,095kb and 26s to download. The chunk that got broken off is 27kb (30kb got cut out of the original size), which takes 800ms +/- 100ms. That seems like a pretty good tradeoff. webpack starts downloading it as soon as it is able to, and we don’t have too much to lose there.
Now, let’s try splitting out something smaller. This time we break off a 14kb chunk, which takes about 600ms +/- 100ms and saves us no time on the main chunk.
What about if we break off something larger? Say 63kb, which takes about 1.5s to download and shaves off about the same from the main chunk.
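Those three scenarios can be roughed out with a back-of-the-envelope calculator like the one described. The bandwidth here is implied by the post’s own numbers (1,126kb in ~26s), but the per-request latency is my assumption, tuned to land near the quoted chunk times; this is a sketch, not the actual tool.

```javascript
// ~43 kb/s, implied by 1,126kb taking ~26s on the "bad connection".
const BANDWIDTH_KB_PER_S = 1126 / 26;
// Assumed fixed per-request overhead (connection setup, request latency).
const REQUEST_LATENCY_S = 0.2;

// Estimated time (seconds) to fetch a single chunk of `sizeKb` kilobytes.
function chunkTime(sizeKb) {
  return REQUEST_LATENCY_S + sizeKb / BANDWIDTH_KB_PER_S;
}

// Cost of a split: the main chunk still has to finish downloading,
// and every extra chunk occupies a download thread on top of that.
function splitImpact(mainKb, chunkKbs) {
  return {
    mainTime: chunkTime(mainKb),
    chunkTimes: chunkKbs.map(chunkTime),
  };
}

console.log(splitImpact(1095, [27])); // main still ~25s; the 27kb chunk ~0.8s
```

Note how the fixed per-request cost dominates small chunks: a 14kb chunk pays almost as much latency as a 63kb one, which is exactly why tiny splits are a bad deal.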
The interesting thing about this: the smaller the chunk, the more the user has to pay for it. They have to wait the same amount of time to download the main chunk, and then wait more time for the other chunks to download, taking up valuable download threads.
When you actually dig into the content of the chunks, you also realize that there’s a lot of duplication. For example, we have a lot of shared React components that aren’t common enough to be pulled into their own chunk, so they get duplicated across chunks.
Once the split points were appropriately configured, we went from 4 chunks to 2. Our main chunk dropped in size, and our other chunks were large enough to not hold up download threads.
So, what is the magic number at which it makes sense to split something into its own chunk? Your requirements may say something different, but it’s probably in the ballpark of 40-60kb.
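In a modern webpack (v4+) config, that ballpark can be expressed through `optimization.splitChunks`. The thresholds below are my assumption based on the 40-60kb figure, not the author’s actual config; tune them against your own measurements.

```javascript
// webpack.config.js (fragment) — a sketch, not a production config.
module.exports = {
  // ...entry, output, loaders...
  optimization: {
    splitChunks: {
      chunks: 'all',
      // Don't bother creating chunks smaller than ~40kb; tiny chunks
      // cost more in request latency than they save in main-chunk size.
      minSize: 40 * 1024,
      cacheGroups: {
        commons: {
          // Pull modules shared by 2+ chunks into a common chunk
          // instead of duplicating them across split points.
          minChunks: 2,
          reuseExistingChunk: true,
        },
      },
    },
  },
};
```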
I get contacted a lot with job offers. Most of them want me to work “for no money” and accept payment with equity. Continue reading “Payment with Equity”
I’ve been writing software nearly every day for just over 20 years now, whether for a personal project, a school project, contract work, or a salaried job. It’s been fun, to put it mildly. I’m glad the 10,000-hour rule was tossed out, because I’m far from a world-class expert. Continue reading “Change of Schedule”