First, you have to choose a compiler. DMD is most typical, although GDC (based on GCC) and LDC (based on LLVM) offer improved performance and more architectures if you need them. We’ll consider DMD. You will also want to install DUB, Dlang’s package and build management system.
Typically, you can choose between downloading binaries, setting up through a script, or using your package manager. The package is actually the best option, as it includes full binaries, adds executables to PATH rather than relying on setup activation scripts, and makes package management easier.
```bash
sudo wget https://netcologne.dl.sourceforge.net/project/d-apt/files/d-apt.list -O /etc/apt/sources.list.d/d-apt.list
sudo apt-get update --allow-insecure-repositories
sudo apt-get -y --allow-unauthenticated install --reinstall d-apt-keyring
sudo apt-get update && sudo apt-get install dmd-compiler dub
```
After that, `dmd --version` and `dub --version` should work fine.
There are actually many ways to develop with Dlang but this will focus on the more popular approaches on Linux: Visual Studio Code and IntelliJ. Of those two I recommend IntelliJ, although neither option is exactly good.
For IntelliJ, install the D Language plugin, then point it to your DMD binary (`/usr/bin/dmd` if you installed the apt package) and to DUB, which it should be able to auto-detect. That's it: now build the module, and the plugin should automatically create run configurations to compile, run DUB and run the D app. Note that executing one of the run targets in debug will prompt you to configure the path to GDB. This may be the occasion to provide other paths such as DScanner, DCD, dfmt or DFix, depending on how well you want to set up your workspace.
For Visual Studio Code, install the code-d extension. Opening a `.d` file will automatically install serve-d, the companion language server. This should provide rudimentary autocompletion, linting and building facilities. The extension's docs should open automatically as well for more details; to see them again, do Ctrl-Shift-P -> code-d: Open User Guide / Documentation. To create a project: Ctrl-Shift-P -> code-d: Create new Project. You can then do Ctrl-Shift-B to build or run your application; this runs `dub build` and `dub run` respectively. You can also run `./dlang` to run the produced binary.
To debug using the CodeLLDB extension, you can use my launch.json and tasks.json.
Adding dependencies is very simple. For example to add vibe-d, a popular web, async and concurrency toolkit:
```bash
dub add vibe-d
```
which adds the dependency to `dub.json`.
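For illustration, the resulting entry in `dub.json` might look like this (the version constraint is whatever dub resolves at the time):

```json
{
    "dependencies": {
        "vibe-d": "~>0.9.2"
    }
}
```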
Basic unit testing can be achieved by adding functions with the `unittest` header to any `.d` file, e.g:
```d
unittest {
    assert(1 + 1 == 2);
}
```
Then run `dub test`. But you get no control whatsoever beyond success/failure. For more, packages like unit-threaded or dunit offer more advanced unit testing.
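As an illustration, a hedged sketch with unit-threaded (named tests and `shouldEqual` per its docs; `fib` is just an inline stand-in for real code under test):

```d
// add the dependency first: dub add unit-threaded
import unit_threaded;

int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

@("fibonacci base cases")
unittest {
    fib(0).shouldEqual(0);
    fib(1).shouldEqual(1);
}
```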
Visual Studio Code is the only officially supported “IDE” for developing C# on Linux. Get it here https://code.visualstudio.com/docs/setup/linux
Install the C# extension from Microsoft https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp
Install .NET Core https://docs.microsoft.com/en-us/dotnet/core/install/linux-ubuntu
Either use Snap, but note that it will not be confined:

```bash
sudo snap install --classic dotnet-sdk
```
Or install through apt using Microsoft’s repo:
```bash
wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo apt-get update
sudo apt-get install -y apt-transport-https
sudo apt-get update
sudo apt-get install -y dotnet-sdk-5.0
```
`dotnet --version` should print correctly, and that's it. The C# extension will call dotnet from PATH.
Create a new solution:
```bash
dotnet new sln -o MySolution
```
In the folder that was created, create a new CLI project (run in a blank folder):
```bash
dotnet new console -o MyProject   # or: dotnet new classlib for a library project
dotnet build
dotnet run
```
To debug through VS Code, create a launch.json automatically and run. This builds the project using `dotnet build` (with whatever is in your `.csproj`) and runs with debug. To disable the lengthy module load logs, add this to the launch configuration:
1 | "logging": { |
You can change the main class by adding this to the `.csproj` under `<PropertyGroup>`:
```xml
<StartupObject>csharp.Day10</StartupObject>
```
Adding dependencies is easy too, for example to add the YamlDotNet library through NuGet:
```bash
dotnet add package YamlDotNet
```
This simply adds a dependency to your `.csproj`:
```xml
<ItemGroup>
    <!-- version as resolved by NuGet at install time -->
    <PackageReference Include="YamlDotNet" Version="x.y.z" />
</ItemGroup>
```
To add unit tests, create an `xunit` project under the same solution with tests in it, and add the original project as a dependency:
```bash
dotnet new xunit -o MyProject.Tests
dotnet add ./MyProject.Tests/MyProject.Tests.csproj reference ./MyProject/MyProject.csproj
dotnet sln add ./MyProject.Tests/MyProject.Tests.csproj
```
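For illustration, a minimal test class in the new project could look like this (sketch only; here with a trivial inline `Calculator`, whereas normally the code under test would live in MyProject):

```csharp
using Xunit;

public static class Calculator
{
    public static int Add(int a, int b) => a + b;
}

public class CalculatorTests
{
    [Fact]
    public void Add_ReturnsSum()
    {
        Assert.Equal(4, Calculator.Add(2, 2));
    }
}
```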
Unfortunately, the C# extension must be in the project folder to work well, so making a single .vscode folder for the entire solution is not a good idea.
Now to run it: click `Debug Test` above a unit test to launch it (or `Debug All Tests` to run a unit test class). You could also wire tests into `launch.json`, but this is obviously not ideal. There is more online on how to organize a .NET Core project and C# unit tests. I ended up just using the "Debug Test" button.
My resulting project is here.
Visual Studio Code is the best option for a pseudo-IDE, since IntelliJ AppCode is still limited to Mac OSX only.
Installing Swift is easy enough: install the listed dependencies and download a release here, untar wherever you want, add it to PATH through `/etc/environment` (or put it in `/usr/bin`) and make sure `swift --version` works. For reference, I get 5.3.1 as of writing this.
In VSCode, install this extension (the only one with some level of maintenance).
For autocompletion, the Swift toolchain also comes with its language server under `swift-5.3.1-RELEASE-ubuntu18.04/usr/bin/sourcekit-lsp`. In VSCode, set:
1 | "sde.languageServerMode": "sourcekit-lsp", |
If you're using the settings UI, these are under `Extensions > Swift Development Environment Configuration`.
Note that while autocompletion is excellent, jumping to documentation or implementation is unavailable.
To set up debugging with LLDB, follow this article. It’s fairly limited however, as you currently cannot see variables, but breakpoints and step-by-step debugging work just fine.
SwiftLint is available here but difficult to set up on Linux.
Basic setup (longer version here):
```bash
# standard SwiftPM bootstrap; the original listing was longer
mkdir myproject
cd myproject
swift package init --type executable
swift build
swift run
swift test
```
My resulting project is here. This includes some unit tests so it makes a good starter project as well.
Now you can build, run and test your project, debug it with LLDB, and experiment in the REPL (`swift` in the terminal). I hope this helps!
Whatever I tried (`ls`, `vi`, `touch`), I could not find non-UTF8 files created on another server. This was a problem with the volume mount. On the server with working filenames, the mount was NFSv3, which does support other encodings. NFSv4, as used by the mount on the other server, only supports UTF-8 filenames.
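To check which NFS version a given mount actually negotiated, one quick way is:

```bash
# each entry lists the mount's options, including vers=3 or vers=4.x
nfsstat -m
```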
One solution is to fix the mount to use NFSv3:
```bash
sudo mount -t nfs -o vers=3 network-drive:/path/to/folder /mnt/nfsv4
```
This is not always available or transparent, unfortunately. A more correct but potentially problematic solution is to convert filenames to UTF-8 using the `convmv` util, for example:
```bash
# convmv performs a dry run by default; add --notest to actually rename
convmv -f latin8 -t utf-8 -r /mnt/nfsv3
```
Lambdas allow you to write more intuitive assertions and assumptions (although old-style syntax is fully supported). You can also group assertions. This:
```java
Assume.assumeTrue(testMessage.getProtocol().equals("HTTP"));
```
may become:
```java
Assumptions.assumingThat(testMessage.getProtocol().equals("HTTP"),
    () -> {
        // grouped assertions, executed only when the assumption holds (sketch)
        assertEquals("HTTP", testMessage.getProtocol());
    });
```
@Tag replaces @Category, saving the need to define interfaces:
```java
public interface OnlyRunWithIntegrationTests {}

// JUnit 4 usage (sketch):
@Category(OnlyRunWithIntegrationTests.class)
public class TestsJunit4 {
    // ...
}
```
becomes
```java
@Tag("integration")
public class TestsJunit5 {
    // ...
}
```
Both can for example be used through command line parameters or IDE integration. This allows granular control in defining test groups, which can be useful in particular in integration test suites.
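For instance, assuming Maven Surefire with the JUnit 5 platform, tags can be selected from the command line:

```bash
mvn test -Dgroups=integration           # run only @Tag("integration") tests
mvn test -DexcludedGroups=integration   # run everything except those
```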
The @Nested annotation allows you to write logically grouped tests and define shared logic or attributes for these groups. For example:
```java
// Sketch -- the original listing was lost
public class DBTests {

    @Nested
    class WhenConnected {
        // shared setup and tests that need a live connection
    }

    @Nested
    class WhenDisconnected {
        // tests for reconnection and error handling
    }
}
```
There are other future-proofing reasons to use Junit 5 over 4, and projects can mix between them so it’s better to get used to the future testing platform.
With that said, the advantages of JUnit 5 are minor unless you need them, which explains its relatively slow adoption. But thanks to the backward compatibility of JUnit 5, a migration is not urgent, and it can be done progressively and painlessly while bringing immediate benefits.
XFCE appears to see a ghost display for some reason. In fact, as I took the above screenshot, I saw that it was 1748 pixels high instead of 1080, so the wallpaper actually starts normally at the top of the ghost display; it's just cropped in that screenshot.
I have checked various possible reasons for that number but nothing made much sense, so I don’t know what the window system is thinking here.
For a quick fix, just make X redetect your displays using:
```bash
xrandr --auto
```
The panel should now be back at its intended place. For a more permanent solution check my article on making a script that runs at login.
I wanted to one-off batch-process different data that I extracted from the production DB to a file. I removed the `@Parameters` annotation and the matching `@RunWith(Parameterized.class)` at the test suite caller, because I no longer wanted the test to be parameterized. Then I got lazy and left the rest of the test (constructors, other tests, etc.), like this:
```java
// Sketch of the leftover test class -- the original listing was lost
public class FibonacciTests {
    private final int input;
    private final int expected;

    // The parameterized constructor stayed behind after removing @Parameters
    public FibonacciTests(int input, int expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void batchProcess() {
        // the one-off processing I actually wanted to run
    }
}
```
I ran the test and was greeted with the following failure:
```
run-test:
[junit] Testsuite: demo.TestSuite
[junit] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.077 sec
[junit]
[junit] Testcase: initializationError took 0.014 sec
[junit]     Caused an ERROR
[junit] Test class should have exactly one public zero-argument constructor
[junit] java.lang.Exception: Test class should have exactly one public zero-argument constructor
[junit]     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
[junit]     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
[junit]
[junit] Test demo.TestSuite FAILED
```
It took me a good 5 minutes to understand what was wrong. Removing the `@Parameters`, along with `@RunWith(Parameterized.class)`, meant that JUnit no longer relied on the constructor with arguments, and instead tried to initialize the test class with a zero-argument constructor.
But as the Java documentation says, the Java compiler provides a default constructor to classes when no constructors exist. In my case, since the test class had just one constructor with arguments, and no explicit default constructor, there is no longer an implicit default constructor, so the class can no longer be constructed by JUnit.
In retrospect, this seems obvious. But that message was not clear to me.
So what’s the solution? Depends on the case. In mine, I only wanted to run the batch test so I had to remove the parameterized constructor, which meant I once again had a default constructor provided by the compiler. In another situation, I might have had to add my own explicit default constructor.
More generally, check for the existence of a parameterized constructor without an explicit default constructor.
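For reference, a minimal sketch of the corrected class: with no explicit constructor at all, the compiler supplies the implicit default one, which is the single public zero-argument constructor JUnit expects.

```java
import org.junit.Test;

public class FibonacciTests {
    // No explicit constructor: the compiler provides the implicit default one,
    // which is now the only public constructor -- exactly what JUnit requires.

    @Test
    public void batchProcess() {
        // data that used to come from @Parameters is now built inside the test
    }
}
```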
The specified database user/password combination is rejected:

```
[28000][10100] [Database][JDBC](10100) Connection Refused: [Database][JDBC](11640) Required Connection Key(s): PWD; [Database][JDBC](11480) Optional Connection Key(s): AccessKeyID, AuthMech, AutoCreate, BlockingRowsMode, ClusterID, DbGroups, DisableIsValidQuery, DriverLogLevel, EndpointUrl, FilterLevel, IAMDuration, Language, loginTimeout, OpenSourceSubProtocolOverride, plugin_name, profile, Region, SecretAccessKey, selectorProvider, selectorProviderArg, SessionToken, socketTimeout, ssl, sslcert, sslfactory, sslkey, sslpassword, sslrootcert, SSLTruststore, SSLTrustStorePath, tcpKeepAlive, TCPKeepAliveMinutes, unknownLength
```
And then I enter my password and all is fine… until the next query. And this gets annoying very quickly because I keep having to copy/paste the password.
It turns out that the link between my Mac Keychain and IntelliJ was not working too well. I just had to head to Preferences and tell IntelliJ to save passwords using KeePass (which is a neat feature by the way) and everything works great again.
What’s the actual problem though? I don’t know. The keychain works well and IntelliJ is awesome. I do suspect the keychain to be guiltier here, as it tends to like nagging me for passwords. Better safe than sorry I suppose. If you know a likely cause, please send me an email.
The question goes this way:
You have a list of licenses for a product. Find the earliest year which is not covered by an unexpired license.
Candidate: Is there a minimum of licenses? And isn’t MIN_INT always correct?
Interviewer: What do you think?
Candidate: Well, if there are no licenses, it is trivially possible to return MIN_INT. And I suppose you meant the earliest year past the earliest start date.
Interviewer: Right.
Candidate: How are dates represented? And are intervals inclusive?
Interviewer: What do you think?
Candidate: How about `(start, end)`? And for simplicity let's say intervals are indeed inclusive, so `(2000,2000)` covers exactly one year.
Interviewer: Sounds good.
Candidate: How about an example? Say we have:
```
[(2011, 2015), (2021, 2022), (2014, 2018), (2030, 2035), (2019, 2019)]
Result: 2020
```
Interviewer: Sounds good.
Go over each interval `(start, end)`, and insert each of the `end - start + 1` years between `start` and `end` into a list. Then, sort that list in increasing order. Finally, iterate over that list and return the first number `n` not preceded by `n-1`. The sort dominates, and the number of inserted elements can grow exponentially in the size of the input as intervals get larger and sparser, for a worst case of `O(2^n * log(2^n)) = O(2^n * n)`.
In a similar spirit to the naive solution but acting directly on the intervals, sort the list of intervals by increasing `start`, then iterate over it. If at any point `current.start > previous.end + 1`, we may return `previous.end + 1` as the solution.
A quick Java solution with test cases may look like this:
```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of the described solution -- the original listing was lost
public class MinNoOverlap {

    public static int earliestUncovered(int[][] intervals) {
        if (intervals.length == 0) return Integer.MIN_VALUE;
        // Sort intervals by increasing start year
        Arrays.sort(intervals, Comparator.comparingInt(i -> i[0]));
        int coveredUpTo = intervals[0][1];
        for (int i = 1; i < intervals.length; i++) {
            // Gap found: the year right after the covered range is free
            if (intervals[i][0] > coveredUpTo + 1) break;
            coveredUpTo = Math.max(coveredUpTo, intervals[i][1]);
        }
        return coveredUpTo + 1;
    }

    public static void main(String[] args) {
        int[][] example = {{2011, 2015}, {2021, 2022}, {2014, 2018}, {2030, 2035}, {2019, 2019}};
        System.out.println(earliestUncovered(example)); // 2020
    }
}
```
Complexity is `O(n*log(n))` for the sort plus `O(n)` for the iteration, so `O(n*log(n))` overall.
If looking for an optimization, you can remark that, since we sort the intervals based on limited integer values, we can use base-N radix sort and decrease the sorting cost down to `O(n)` amortized linear time. In fact, recent JDK implementations of `Arrays.sort` for primitive arrays can already switch to radix sort in some cases.
For reference, this SO question talks about it.
The question goes this way:
You are a critical TV cable service, with various qualities and formats for different channels. These channels only run at certain times of the day. You need to talk to a PHY cable provider service to get a guarantee for sufficient bandwidth for your customers at all times. How would you determine the necessary bandwidth?
Candidate: Let me write an example to see if I have the requirements down. I will be organizing the data in a list of 3-tuples in the format `(start_time, end_time, bandwidth)`, with bandwidth in, say, Mbps, and times in hours between 0 and 24. So an example would be:
[(2, 16, 10), (1, 5, 20), (10, 12, 25), (20, 22, 30)]
… where the first tuple indicates 10Mbps between 02:00AM and 4:00PM.
In this instance, we reach a peak of `10 + 25 = 35 Mbps` between 10AM and 12PM.
Interviewer: Sounds fine.
Let's start at the most basic solution: get an array of hours `[0, ..., 24]`, and for each of them scan the intervals and get the total bandwidth being consumed at that time.
This doesn’t work for infinite time precision and is very inefficient for high precision. But from here we can guess better solutions.
In many intervals problems, we can think in terms of breaking down intervals into smaller ones. So we can start with one interval `[(0, 24, 0)]` (0 Mbps between 00:00 and 23:59), then iterate over the input, integrating each new interval:
```
[(0, 24, 0)]
-> (2, 16, 10)  => [(0,2,0), (2,16,10), (16,24,0)]
-> (1, 5, 20)   => [(0,1,0), (1,2,20), (2,5,30), (5,16,10), (16,24,0)]
-> (10, 12, 25) => [(0,1,0), (1,2,20), (2,5,30), (5,10,10), (10,12,35), (12,16,10), (16,24,0)]
-> (20, 22, 30) => [(0,1,0), (1,2,20), (2,5,30), (5,10,10), (10,12,35), (12,16,10), (16,20,0), (20,22,30), (22,24,0)]
```
This is tricky to implement efficiently, however, because every time we integrate another interval from the input, we have to find which intervals to break into two or more. We can either order the input list, or binary search the output list for intervals to break, but either way we only get to O(n * log(n)).
We can see that what we really need is some variable that iterates over the input and keeps track of the maximum concurrent bandwidth so far. In order to do that, it would be a good idea to process increases and decreases separately, and modify the current concurrent bandwidth accordingly.
So we will rearrange each interval `(start_time, end_time, bandwidth)` into two events `(start_time, bandwidth)` and `(end_time, -1 * bandwidth)`. Then we will sort the events by increasing time. Now we are ready to iterate over the input, keeping track of both the current and maximum values of the concurrent bandwidth.
A quick Java solution with test cases may look like this:
```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of the described solution -- the original listing was lost
public class MaxConcurrentSum {

    public static int maxConcurrent(int[][] intervals) {
        // Split each (start, end, bandwidth) into (start, +bw) and (end, -bw)
        List<int[]> events = new ArrayList<>();
        for (int[] iv : intervals) {
            events.add(new int[]{iv[0], iv[2]});
            events.add(new int[]{iv[1], -iv[2]});
        }
        // Sort by time; on ties, process decreases first so touching intervals don't overlap
        events.sort(Comparator.<int[]>comparingInt(e -> e[0]).thenComparingInt(e -> e[1]));

        int current = 0, max = 0;
        for (int[] e : events) {
            current += e[1];
            max = Math.max(max, current);
        }
        return max;
    }

    public static void main(String[] args) {
        int[][] example = {{2, 16, 10}, {1, 5, 20}, {10, 12, 25}, {20, 22, 30}};
        System.out.println(maxConcurrent(example)); // 35
    }
}
```
The complexity here is `O(n * log(n))` due to the event sorting.
This Geeks for Geeks article talks about this problem and proposes the use of an auxiliary array with a value for each time unit between the minimum and maximum times. This array stores the value of every event, which we then iterate with the current and max bandwidth variables. This replaces the need for sorting, but does not support high precision. So this is in principle `O(n)` complexity, but the auxiliary array can blow up, exponentially in the size of the input encoding, for very large intervals.
Another solution could involve using an interval tree (a tree that holds intervals), which takes `O(n * log(n))` to build, but only `O(log(n))` to query for overlaps. This offers the same complexity but may provide shorter actual run time.
A variant of the previous problem, which I was also asked, goes this way:

You are the maintainer of a phone service. You have a log of calls including, for each call, a tuple of `(start_time, end_time)`. How do you process this log to fetch a history of the number of concurrent calls?
The previous problem’s solution can also work here, but instead of updating the maximum number of overlaps, we write the current number every time it changes.
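A hedged sketch of that variation, reusing the event decomposition from above (the returned tuple layout is an assumption):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ConcurrentCallsHistory {
    // Each call (start, end) becomes (start, +1) and (end, -1) events.
    // After sorting by time, a running sum is the number of concurrent calls.
    public static List<int[]> history(int[][] calls) {
        List<int[]> events = new ArrayList<>();
        for (int[] c : calls) {
            events.add(new int[]{c[0], +1});
            events.add(new int[]{c[1], -1});
        }
        // On ties, process call ends before call starts (a detail to clarify with the interviewer)
        events.sort(Comparator.<int[]>comparingInt(e -> e[0]).thenComparingInt(e -> e[1]));

        List<int[]> result = new ArrayList<>(); // entries of (time, concurrent calls)
        int current = 0;
        for (int[] e : events) {
            current += e[1];
            result.add(new int[]{e[0], current});
        }
        return result;
    }
}
```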
A common use case is to ignore IntelliJ's `.idea` folder but include its `runConfigurations` subfolder so that your team can run your launchers. Now let's take a nested example:
```
project
+-- .gitignore
+-- afolder
|   +-- afile.txt
|   +-- bfolder
|   |   +-- bfile.txt
|   |   +-- cfolder
|   |   |   +-- cfile.txt
```
Within the `project` folder, we want to ignore all but `afolder/bfolder/cfolder/`. So we expect `afolder/afile.txt` and `afolder/bfolder/bfile.txt` to be ignored.
Now you might expect gitignore to work this way:
```
afolder/                    # ignore afolder
!afolder/bfolder/cfolder/   # ...but keep cfolder
```
This doesn't work and instead excludes all of `afolder`. As noted in the commit of git introducing the `!` pattern:
```
It is not possible to re-include a file if a parent directory of that file is excluded. (*)
(*: unless certain conditions are met in git 2.8+, see below)
Git doesn't list excluded directories for performance reasons, so any patterns
on contained files have no effect, no matter where they are defined.
```
In 2016, there were two attempts to allow this kind of recursive un-exclusion, but they led to regressions and were reverted. There has been no progress since. So the way to do it is with this `.gitignore`:
```
afolder/*
!afolder/bfolder/
afolder/bfolder/*
!afolder/bfolder/cfolder/
```
Note that, in order to exclude, we use `folder/*`: this only excludes the contents of the folder but not the folder itself, allowing git to apply un-exclusion patterns (`!`). If we write `folder/`, we tell git to unconditionally ignore all of the folder.
Then we check that the result is achieved:
```
> git status -u --ignored=matching
On branch master

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)
        new file:   .gitignore

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        afolder/bfolder/cfolder/cfile.txt

Ignored files:
  (use "git add -f <file>..." to include in what will be committed)
        afolder/afile.txt
        afolder/bfolder/bfile.txt
```
In a more general way, this repeating pattern of “excluding then un-excluding” is unavoidable as of git 2.21 (February 2019).
If you already know all about preparing your portfolio, leetcoding, searching for positions, planning with a timeline, and so on, and all you want is to see examples of interviews, you can skip to the Interviewing section.
Disclaimer for candidates: A lot of this is written with the mindset of investing significant time and effort into getting the best position possible. If all you want is a 9-to-5 job that pays average salary until you retire, you can probably skip this write-up and already get accepted to positions under your actual potential based on your resume. Keep in mind, these are my experiences and takeaways. Many resources (books, online or others) will go more in-depth about these topics and may fit your mindset more than this article. If you want to know more about my own process, I will publish more info on those in a later article. For now, here is a Sankey graph of my interviews:
Disclaimer for employers: What I write here aims to level the field between candidates. This can only help to find excellent candidates who would otherwise fail due to lack of interview practice. And of course I do not divulge specific questions or reveal who asked what, because that would defeat the purpose. Of course, neither do I reveal anything under NDA.
The first thing to do is assess your situation:
Who you are: what your education and background is; how much experience, skills and awards you have; how many languages (programming or other) you speak; etc. This helps you figure out the kind of positions that may be available to you.
What you want: Ask yourself what you want to do. You may be very satisfied with your current career trajectory, you may want to move further, or maybe you would like to switch paths. For example, switch to full-stack development, get closer to DevOps, join open source, approach fintech, machine learning or robotics, or accelerate a transition to management.
Synthesis: Combine the first two points. For example, if you’re reaching for a senior engineer position, you will be limited in how much of a change you can make from your current experience without going back to junior. But taking a few steps back on the career ladder may be the best way to spend 2 years when you look back 10 years from now. More pragmatically, this will be very important when you start narrowing down open positions, because there are many.
The next piece of planning information you will need is a timeline. If you have already started searching, you still need to prioritize positions instead of jumping at whatever companies you like.
I recommend preparing three batches of positions (more about how to do that in the Searching section): one with jobs you don’t really want (mostly for interview practice), one with those you may accept with the right offer, and one with those you will probably accept unconditionally or more easily. You may very well miscalculate and receive surprisingly interesting offers from companies you didn’t think much of, so be prepared to negotiate with them for a deadline extension.
The batches help you prepare more efficiently and fail earlier. It’s also a good method to spread out the interviews over a palatable period: I really don’t recommend doing two onsites or more than four interviews over two days, you’ll burn out quickly.
In my case, I was due to leave my current position at a given date, no later or earlier. I wanted to start my next job up to a month afterwards. This kind of situation may also occur due to dissatisfaction from either side, money problems or other constraints. I do suggest taking this gradual approach even in other situations. Therefore, you have to know when to start the process and how you will get there. Here is what worked for me: the timeline leaves roughly 1.5 months to give notice to your current employer, as well as some buffer for negotiations, to take a breath, or to go on a vacation to avoid burnout. You may also want to collect unemployment.
The first thing a recruiter does is look carefully at the candidate's profile, especially if they come without a referral. This includes looking at your resume, your portfolio and your online presence.
You may find out during interviews that you had a weak spot in your knowledge, and that’s why practicing interviews is good. You fail earlier and get to study the topic in depth before later interviews. For example, my concurrency knowledge was limited because, in my last job, I had mainly worked around synchronization problems using concurrent transactions, database locking and retrying. As I noticed the importance of the topic in coding interviews, I studied and improved, which really helped me later on.
The pattern with this kind of ad-hoc learning is to read about the basics, then look at the APIs of the main implementations, and finally know when to use which. There were several such topics I wish I had studied earlier.
This is all specialized knowledge, but it can bring you some points. Broader knowledge like design patterns, algorithms, data structures and operating systems is assumed and should be mastered much better.
The world of coding interviews has become a large market with many players offering many tools; the most useful one by far, and the one I'll reference throughout, is Leetcode.
Now for the proper preparation. Leetcoding is a discipline of its own, and will not make it to your portfolio. It can make you a better programmer, but it's mostly good for coding interviews. So this is where most of your effort should go. If you have already used Leetcode, you can skip this.
Problems are distributed into 3 categories: easy (good solution in 15 minutes), medium (reasonable solution in 30 minutes) and hard (brute force solution within an hour). These times are what your goal should be. The most important of them should be to get good at providing decent solutions for medium problems within 30 minutes, as this is what will be expected of you in most coding interviews. You should also eventually be able to explain some possible optimizations.
I recommend passing at least 100 problems on your own, each of them before looking at hints: 50 easy, 40 medium and 10 hard. As you can see, this amounts to about 50 hours of total work: on average half an hour a day over a 3.5-month search period.
For each problem, before you start solving, read the description well, take your time to lay out a good algorithm, and think about test cases. This will help in real situations, as interviewers are happy to see a candidate talk about edge cases, but dissatisfied when they see you making mistakes, even if you fix them right away. It may sound unrelated to actual day-to-day coding, but I've been rejected from one place for this sole reason. Besides, you'll be asked about test cases anyway.
Finally, when the problem passes, go to the discussion tab and find the two best solutions: the most elegant (usually the most upvoted) and the fastest (titled something like "beats 100%"). If you see that you won't pass after twice the time I recommended (i.e. 30/60/120 minutes), you should go look at discussions anyway, and try to improve next time.
I've seen recommendations to go back to each problem you fail to pass after a week and solve it again. If this works for you, great. Otherwise, there are enough questions on Leetcode that I think it's a better investment of your time to look at new problems rather than grinding the same ones over and over. There are exceptions to this: the classic problems which you are expected to solve well no matter what (see the Coding interviews subsection).
Note that you don't necessarily need to pay Leetcode's premium fee (although it's a relatively small investment).
The goal here is to have 30-40 positions ready to distribute over a period of time. Expect up to half of them to not respond or to reject you before screening for various internal reasons. That still amounts to a decent number of interviews since most companies will have you go through 2-3 separate ones.
The very best way to find positions is to ask yourself if you have friends, family or ex-coworkers that can introduce you. All companies, even the largest, will prefer by a large margin to bring in someone who has been vetted by a current employee rather than take a bet with online applications. This raises your chances by a few notches at every step of the way, from the filtering, through the screening, the onsite and even up to the final offer. You will be able to negotiate from a better position if you have a referral next to your profile. The referrer might also get a big bonus if you are hired, so don’t hesitate to ask.
The next best thing is to look for positions online. This allows you to focus on the best positions for you.
If you know good companies, go directly to their Careers site. Other than that, I have found LinkedIn and Indeed to be the best ways to find positions. Filter primarily for your field of expertise (e.g DevOps, Java, React, full-stack, etc) in order to get positions relevant to your seniority level. Continue making those search queries while you’re interviewing, as interesting positions come and go every week.
There are several kinds of online applications, from one-click applies to lengthy questionnaires.
In any case, some places give you a way to send a cover letter. I recommend not bothering with it. It can do more damage than it can help, and it can consume a lot of time.
Some people report success with headhunters. They will look for positions according to your criteria, apply and reply on your behalf, schedule interviews and generally work as your career secretary for nothing on your side, though your mileage will vary. I prefer to rely on my writing and can report that it doesn’t really matter. So it’s up to personal taste.
When you find a position, you should ask yourself if it is what you want: the role, the stack, the seniority, the company culture and the compensation.
You can answer most of these by looking at the position’s description, as well as the company’s Glassdoor if it exists (look at the reviews, salaries and benefits pages), or otherwise by asking your referral. You will seldom be applying to no-name companies without a referral, so you’ll often have one of these two reliable sources.
The objective of this exercise is to put positions on a scale and apply to the top ones, rather than aggressively filtering them.
Each interview may require a bit of preparation.
In my experience, the interview process varies, but it is consistent by company size.
In any case, all follow up with the final HR interview that precedes the actual offer.
There are different types of interviews. I know some people who hate or even completely give up on some kind of interview. For example, because assignment tasks take too long, or because whiteboard interviews are not a good way to assess a programmer’s skills. While these may be true, career-advancing positions often have you go through the interviews you don’t like, so try to be comfortable with all the types.
I have identified 5 fundamental types of interviews: coding interviews, assignment tasks, design and architecture interviews, behavioral interviews, and technological interviews.
Before we talk about coding interviews, if you hate them and don’t want to deal with them, look at this repo that lists companies that don’t do whiteboard interviews.
The coding interview often starts with a story semi-related to the company. The interviewer asks you how you would solve some small problem the company has.
You should absolutely present some possible inputs and outputs, first to see if you’ve got a good API for the requirements, and second to foresee the edge cases. Also clarify ahead of time whether you are expected to write in pseudocode or in some language, and if that language is okay.
Then you present a brute-force solution before thinking of possible optimizations. The interviewer either tells you to go ahead with it or asks some followup questions to make sure you’re going in the right direction.
Then you can get to coding. This is either done on the whiteboard or on some code pad. You rarely get to choose between the two. The whiteboard is more intimidating, but it takes longer to write, so you can think about the code while you're writing it. Code pads usually have syntax highlighting, but they usually cannot compile or run your code, much less provide auto-completion. Leetcode is just like that, and that's the reason why it's the best way in my opinion to prepare for this kind of interview.
As you code, speak every second of the way, no matter what happens. Don’t bother writing comments unless you got a remark about some line. Write short words for variable names. Don’t write separate functions unless necessary or there are over 2 re-uses in your code. Occasionally, you can write pseudocode in long problems for obvious and repeating parts (e.g removing a node from a linked list).
Be ready to tell what the time and space complexity are when you’re done. You will be asked about possible optimizations but not asked to code them unless there’s a lot of time remaining.
If you stumble across a mechanism you didn’t know (as in my case on two interviews with Java spinlocks and ArrayBlockingQueues), you will have some difficulty with it. While that kind of interview doesn’t get you many points, it doesn’t necessarily damage your position if you manage to think about the topic logically and show understanding of related mechanisms.
I wanted to insert samples of coding problems here but this article is already long. In the next weeks, I will be posting more entries with detailed examples of coding problems I was asked through my interviews. I will keep an updated list of them here:
In the meantime, this reddit thread lists commonly encountered problems succinctly and accurately in my experience. For straight practice, this is also a short resource to problems to solve by priority. Here is a longer curated list of 100 Leetcode problems.
Assignment tasks are done autonomously over 1-3 hours depending on the nature of the task. It is sometimes composed of several Leetcode-style algorithmic problems done on some type of coding platform. Other times, you have to code a larger assignment in some IDE, in which case you will usually discuss the exact requirements.
Ahead of time, you should attempt to clarify the kind of assignment you will face. Brush up on your build tools (e.g you may expect Gradle but get Maven) and study well any framework you don’t know but have to use.
Time may get short towards the end. Prioritize, and this includes letting go of the elegant way you may have mentioned you would do things, in order to accomplish all tasks. On the scoring scale, completing all items scores higher than code quality, which itself scores higher than the algorithms you used.
Further, you may receive feedback on your code, either in the form of a code review that follows the assignment, or later on during your next call. Be prepared to answer what strengths and weaknesses showed in the assignment, and how you could have improved your solution.
In this kind of interview, you either start with the interviewer asking you about a large project you recently completed on your own, or more usually you are given some problem of that type. For example, you have to approach the architecture of a message queue or the design of a system that matches taxis and travellers.
You speak in broad terms about many architectural concerns: scaling, storage, caching, communication between services, failure handling, and so on.
In interviews that are more focused on design, you may have to write pseudocode algorithms for some of the finer nuances. This typically involves concurrency, traversal heuristics, or data structures.
These interviews are meant to prod you on situational behavior for organizational fit. They are usually done by some kind of manager.
One important aspect of this type of interview is to come ready so you don’t leave too many voids while you think about a good case, and don’t backtrack if it ends up being a bad example.
Some classic questions include the likes of "Why do you want to work here?" or "Where do you see yourself in five years?".
Many interviewers will instead try to come from a more objective direction by asking you about past situations. The cases are not important, and the interviewers say this, as they will readily switch to another question if you're stuck (but try to not get to that point). Hence, as you think about the cases, you can tune your story a bit as long as your reaction was real and fit how you view yourself and where you want to take your career. Some questions include variations of "Tell me about a time you disagreed with a teammate" or "Tell me about a project that failed".
So work on getting answers for these. You will find that each of these probably fit other questions as well, so just keep that store of situations in mind. I will also repeat the CTCI book's advice on the matter: take some time to think of 3-4 interesting projects you had a major part in. For each of these, think of answers to these 6 questions: the challenges you faced, your mistakes and failures, what you enjoyed, how you showed leadership, the conflicts you handled, and what you would do differently.
And think a bit about probable followup questions in each case.
Finally, I have encountered a few technological interviews. Some had taken a short part of the session, some had taken the entire time.
Here, the interviewer tries to see how curious and passionate you are about tech. You will discuss many technological topics including runtime internals, containers, message queues, noSQL, garbage collection methods, latest innovations, etc. These will be interlaced with actual questions, like what `int f; printf("%d", f);` prints; this would involve compiler behavior and registers. And other more exotic ones of the type you will find on CodeKata. There isn't much to do to prepare for this, but it helps to follow some kind of aggregator, tech channel or regular technical blog, as mentioned in the Resources subsection.
Take the above types with a grain of salt, as some interviews fall in several of the fundamental categories. In some cases, this is by design as some companies like to do several interviews that are half-behavioral and half-coding. In other cases, an interview can involve both a deep dive and code, or both situational and technological questions.
After passing each interview, it's time to evaluate the position and decide whether to continue, unless you want more interviewing practice. You should also use the last minutes of each interview to ask questions about the position, as the interviewer will almost always ask you to do anyway. These questions are mostly laid out in the Profiling positions subsection. But with more knowledge gleaned through the interviews and through talking with HR, you can ask additional questions.
Some recruiters play the role of behavioral interviewers and start the negotiation interview with behavioral questions. Then they get to the proper negotiation part. They usually start by explaining the benefits of working at the company, for example vacation days, studying, travel and food allowances. Then you get questions like "What are your salary expectations?" or "Do you have competing offers?".
I have a hard time with negotiations. Like many programmers, I don’t like bargaining. In addition, we never know what the right answers are to maximize our benefits. Most sites tell you to refuse saying any number, and instead react when you are told one. But recruiters are insistent and I can only refuse so many times.
So I have adopted a simple strategy: say reasonable numbers, get as many offers as possible and then play the competition between them. It works well, with many companies knowing that they should straight-up make a high offer, even if you said a lower number.
Recruiters will frown when you tell them about the other offers but that’s expected, as a candidate with no other offers can be a small red flag. Other recruiters will explain that your reasonable numbers are too high, in which case you should think again whether it’s even worth applying if you’ve got other offers.
Another difficult part is to leave offers hanging to have a backup plan while you get more of them. You can explain that you’re not ready to take an offer yet. Later, you can take this kind of opportunity to get more benefits from previous offers when you get better ones. Do this as much as possible and ignore deadlines as most companies will happily take you with the same initial offer if you come back to them.
In any case, companies want you. They won’t readily give up on you after all the investment they made, so push a bit until you are most comfortable with one of the offers. Recruiters have ways to make benefits shine, so always stay pragmatic.
Finally, some time after the offer is made, you get the actual contract. It can vary in length but it often contains questionable clauses. In most cases, most of the contract is non-negotiable, as your recruiter only gets to tweak numbers and not legalese. But you should raise questions anyway to clarify whether they are significant. When you get the contract, the terms are usually final, so make sure you negotiate well before giving any answer.
When comparing offers, don't compare a recruiter's speech to a competing contract, as contracts are often scarier than real conditions, in particular with working hours, remote work or side projects. Get as much information as possible from all sides before taking a final decision.
Also, be prepared for a background check at large companies. Have PDFs ready to prove your experience and education to shorten the process.
That’s it for now. I hope this will help some readers with the interview process. If you have questions, send me an email. Good luck!
I recently received a Dell Venue 8 Pro T01D as a special gift: "it doesn't work, if you can fix it, it's yours". I had no idea what I had even received, apart from being an x86 tablet. But after charging it and turning it on, I saw the Windows 10 setup screen, albeit with a small problem: the touchscreen wasn't responding.
With the help of a USB OTG cable as pictured below, along with a USB hub, I could connect a mouse and keyboard and get to the Windows desktop. That didn’t do much though, since the touchscreen still wasn’t working, and now I could see that neither was the WLAN connection.
I then found a Dell support thread recommending a BIOS update and firmware drivers. While the BIOS update installed fine, it didn’t help and the firmware drivers were not even detecting the devices, and neither were they displaying under the Device Management utility (Win+R, devmgmt.exe, look for “HID-compliant device” under “Human Interface Devices”).
Eventually, after trying other drivers, what turned out to help the most were the chipset drivers (these ones on my hardware). Seems like the CPU had a bad time communicating with the other components.
After doing that, I could install the networking and firmware drivers to restore WLAN and touchscreen functionalities.
In short, at the very least find a USB OTG cable and connect a mouse or keyboard to get to the desktop. Then transfer the chipset and WLAN drivers, either through a thumb drive and USB hub to have the mouse transfer the files, or try using the microSD slot. Finally, once the Venue is connected to the Internet, you can download and install the rest of the drivers. Updating Windows 10 then brings a lot of worthy features such as the swipe keyboard.
While I’m a fan of Linux, after playing with this tablet for a few days, I have to admit that Windows 10 is the best option right now for serious tablets. Even with its 2GB RAM, it works surprisingly well. You need to embrace the spyware and the bloatware of course, but it has its use cases.
Then something bothered me yesterday as I was blinded by the brightness of opening a new tab in Firefox. So I've tried to get my Firefox 64.0 (Quantum) to display new tabs with a dark background.
Many superfluous extensions (e.g New Tab Override) try to do it, as does the userContent.css advice that is all over the Internet. Or even better, you can get the same result just by going to `about:config` and editing `browser.display.background_color` to an acceptable color.
But this all leaves a blinding white flash for half a second before Firefox loads the edited background color.
Eventually, I found a solution on a Reddit thread using code from ShadowFox, which aims to be a dark retheme of Firefox, in particular this file.
To sum it all up, the complete process is this:

1. In `about:config`, change `browser.display.background_color` to the color of your choice,
2. Find your profile's root directory under `about:profiles` ("Root Directory"),
3. In its `chrome` subfolder, open/create `userChrome.css` and append the following CSS (replacing `#1D1B19` with the color of your choice):

```css
#browser vbox#appcontent tabbrowser, #content, #tabbrowser-tabpanels,
browser[type=content-primary], browser[type=content] > html {
    background: #1D1B19 !important;
}
```
```bash
history | awk '{CMD[$2]++;count++;}END { for (a in CMD)print CMD[a] " " CMD[a]/count*100 "% " a;}' | grep -v "./" | column -c3 -s " " -t | sort -nr | nl | head -n10
```
Here are the results on my personal machine:
```
 1  957  17.918%    sudo
 2  602  11.2713%   git
 3  452  8.46283%   cd
 4  252  4.71822%   hexo
 5  215  4.02546%   npm
 6  123  2.30294%   ll
 7  121  2.26549%   nano
 8  95   1.77869%   gcc
 9  83   1.55402%   make
10  78   1.4604%    ssh
```
A few observations:
- I use `nano`, but getting used to `vim` is too much strain on me right now. It is absolutely at the top of my personal improvement to-do list to get used to a tmux+vim workflow, as well as switching from XFCE to i3wm for mouseless window management.
- Much of my usage is plain navigation with `cd` and `ll`. I should get used to a command-line file manager, such as `ranger` or `nnn`.
For the sake of sample size, I do recommend having a large history file size, e.g for bash
, put in your ~/.bashrc
:
```bash
HISTSIZE=10000      # max number of commands to remember per ongoing session
HISTFILESIZE=20000  # max number of lines kept in the history file
```
Or use negative values to set the size to infinite.
This is also useful for the underrated Ctrl+R command lookup, for backing up your installed programs or for other history-related needs. The storage cost is often negligible and it could save your day.
TL;DR: The ancestor image is `FROM scratch`, a no-op command.
I have had many opportunities to introduce Docker to my coworkers in depth, although I had never received any sort of training; rather, I relied on Internet resources.
And so I have had some difficulty answering one question that each of my coworkers would inevitably ask as I explain the layout of a Docker image.
The way I do it is simply going over a Dockerfile, such as this one, which is basically the `Dockerfile` for Telegraf, the metrics agent:
```dockerfile
# Abridged sketch -- the original listing was longer
FROM alpine:3.6
RUN apk add --no-cache ca-certificates && update-ca-certificates
# ...download and verify the telegraf release...
EXPOSE 8125/udp 8092/udp 8094
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["telegraf"]
```
As I introduce the `FROM` operator, I mention the notion of base image, which is explained here. Then, the question occurs: "So what is the ancestor image? What happens if I check the parent image of the parent of the parent…" It obviously comes to a stop eventually.
Let's take the `Dockerfile` above as an example: it is built on top of Alpine Linux, a minimal Linux distro that is very popular for building containers that use little disk space and RAM. It ships with basic utilities like BusyBox for a mere 5MB volume.
So how is Alpine Linux built? It's a bit complicated because of the automated versioning and build process, but it comes down to this: a root filesystem named `rootfs.tar.xz` is put together and then the following Dockerfile builds the Alpine Linux image:
```dockerfile
FROM scratch
ADD rootfs.tar.xz /
CMD ["/bin/sh"]
```
The key is the first line: `FROM scratch`. As of Docker 1.5, `FROM scratch` is a no-op command. It is designed for images that do not need an actual parent image. That could be either a user space like Debian or Alpine Linux that ships with interpreters like `bash` and other utilities usually taken for granted, or just straight-up binaries to make the most minimal images possible.
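For example, a statically linked binary can ship in an image containing nothing else (sketch; `myapp` is a placeholder for your build artifact):

```dockerfile
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]
```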
Up until last week, I had no idea that my ISP’s DNS provider was slowing me down. Of course, in retrospect, it makes sense since my ISP is so lousy and slow. But consider this: slow DNS requests may represent hundreds of milliseconds of latency!
This is a significant addition when websites these days struggle to load under half a second because of all the scripts, requests between microservices and so on.
You may be using a DNS cache such as under Windows, or with systemd’s resolved caching feature. But cache misses are common, and I would wager that most page loads are the first time you visit a domain for the given day.
On top of performance, there are also a number of additional reasons to switch DNS providers, such as privacy, security features or content filtering.
So I knew my DNS provider was slow, but how much of a difference is it? A huge one! See this post for some metrics from all over the world.
To get concrete measurements, I will use this great and easy to use Linux script: https://github.com/cleanbrowsing/dnsperftest
```bash
git clone --depth=1 https://github.com/cleanbrowsing/dnsperftest/
cd dnsperftest
bash ./dnstest.sh
```
Results:
```
              test1   test2   test3   test4   test5   test6   test7   test8   test9   test10  Average
cloudflare    14 ms   14 ms   28 ms   14 ms   23 ms   24 ms   17 ms   73 ms   32 ms   14 ms   25.30
neustar       72 ms   83 ms   73 ms   70 ms   80 ms   73 ms   69 ms   68 ms   73 ms   74 ms   73.50
norton        77 ms   77 ms   70 ms   73 ms   69 ms   74 ms   73 ms   75 ms   85 ms   77 ms   75.00
quad9         75 ms   86 ms   71 ms   72 ms   78 ms   78 ms   75 ms   72 ms   75 ms   71 ms   75.30
google        77 ms   76 ms   67 ms   99 ms   81 ms   71 ms   65 ms   93 ms   75 ms   63 ms   76.70
cleanbrowsing 77 ms   77 ms   75 ms   77 ms   72 ms   74 ms   82 ms   90 ms   81 ms   78 ms   78.30
adguard       80 ms   90 ms   78 ms   75 ms   74 ms   77 ms   72 ms   76 ms   79 ms   91 ms   79.20
level3        86 ms   86 ms   79 ms   81 ms   78 ms   84 ms   83 ms   88 ms   78 ms   88 ms   83.10
opendns       78 ms   71 ms   81 ms   110 ms  75 ms   318 ms  76 ms   88 ms   74 ms   71 ms   104.20
127.0.0.53    235 ms  235 ms  250 ms  217 ms  233 ms  358 ms  197 ms  322 ms  244 ms  261 ms  255.20
yandex        102 ms  251 ms  111 ms  107 ms  103 ms  109 ms  323 ms  116 ms  267 ms  184 ms  167.30
comodo        87 ms   194 ms  111 ms  94 ms   145 ms  89 ms   99 ms   96 ms   1000 ms 97 ms   201.20
freenom       96 ms   323 ms  93 ms   96 ms   563 ms  401 ms  96 ms   472 ms  101 ms  271 ms  251.20
```
Results may vary a lot between locations, so only you can tell which one is the fastest for you. For me, it was Cloudflare, with a huge difference of 229.9ms on average from my ISP's resolver (127.0.0.53 above), almost a 10x ratio!
Your router usually acts as your recursive DNS server, by giving its own address through DHCP. So, switching DNS servers is usually as easy as entering your router interface and changing a value.
For me, on my router's web interface, I have to enter `Basic Settings`, under which I found the `Domain Name Server (DNS) Address`, which I just switched from "Get Automatically From ISP" to Cloudflare's addresses `1.1.1.1` and `1.0.0.1`.
As it happens, Cloudflare’s DNS also seems to have gone to greater lengths to demonstrate their concern for privacy than any other public DNS provider, and they often score the best in terms of performance. On the other hand, Cloudflare as a company has a bad track record with privacy and can now correlate DNS requests with the information gathered by their CDN infrastructure, which is still a concern. OpenVPN may be more appropriate if so.
Ultimately, the only ideal course of action here is to use a VPN. Otherwise, I believe mindfully switching DNS resolvers is a good decision.
For a long time, OpenJDK 11 was missing from Ubuntu's repositories. This still held true after Java 11's General Availability on Sep. 24 2018.
Last month, OpenJDK's repository was finally updated with an up-to-date package for Ubuntu using their PPA:
```bash
sudo add-apt-repository ppa:openjdk-r/ppa
sudo apt-get update
sudo apt-get install openjdk-11-jdk
```
Check with `java -version` (although `java --version` should now work too; if not, you are on an earlier JDK).
If Java was manually set to another version, run:
```bash
sudo update-alternatives --config java
```
And either select OpenJDK 11 manually, or choose 0 for auto mode, i.e. automatically update when (re)installing JDK packages.
Our company is on an intranet, so this is after experience with CI platforms like Atlassian Bamboo and Bitbucket pipelines, as well as orchestration platforms like Docker Swarm and plain Kubernetes.
In my opinion, this is the best setup for private clouds, as it is mostly code as configuration and it’s all open-source. So after half a year of tweaking it, I wanted to share my Jenkinsfile and development workflow.
Another important point of this setup is that it allows me to deploy previous builds, which was an issue I have had since migrating away from Atlassian Bamboo, which had a very handy deployment feature despite its numerous flaws.
```groovy
// Skeleton only -- the original listing was much longer; names are illustrative
properties([
    parameters([
        booleanParam(name: 'DEPLOY', defaultValue: false, description: 'Deploy after build'),
        string(name: 'VERSION_TO_DEPLOY', defaultValue: 'latest', description: 'Redeploy a previous build by tag')
    ])
])

node {
    stage('build') {
        checkout scm
        sh './gradlew build'
    }
    stage('docker') {
        sh "sudo docker build -t registry.address/my-app:${env.BUILD_NUMBER} ."
        sh "sudo docker push registry.address/my-app:${env.BUILD_NUMBER}"
    }
    if (params.DEPLOY) {
        stage('deploy') {
            sh 'oc apply -f deployment.yaml -f service.yaml -f route.yaml'
        }
    }
}
```
The other crucial part of this setup is the collection of files that describe the deployment. They are basically typical deployment/service/route Openshift configuration files. As an indication, these are my files:
deployment.yaml:
```yaml
# Skeleton only -- the original file was longer; details are illustrative
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: my-app
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: my-app
          image: registry.address/my-app:latest
```
service.yaml:
```yaml
# Skeleton only -- details are illustrative
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 8080
      targetPort: 8080
```
route.yaml:
```yaml
# Skeleton only -- details are illustrative
apiVersion: v1
kind: Route
metadata:
  name: my-app
spec:
  to:
    kind: Service
    name: my-app
```
Other files can be adjoined to these files, like ConfigMaps, secrets, nodeport services, etc… I have described some of these in previous articles.
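For context, the deploy stage essentially boils down to applying these files and pointing the deployment at the chosen image tag; a hedged sketch (resource and registry names are assumptions):

```bash
oc apply -f deployment.yaml -f service.yaml -f route.yaml
# redeploying a previous build is just selecting an older tag:
oc set image dc/my-app my-app=registry.address/my-app:${VERSION_TO_DEPLOY}
```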
The workflow enabled by this setup should be straightforward by now: push your changes, let Jenkins build and publish the image, then run the parameterized job to deploy it, or to redeploy any previous build.
There is a lot of tweaking to be done before this setup can be functional, but when it runs it’s flawless.
You can also see these files on the Github repo. If you have any question or remark, leave a comment, or send me an email.
As I once mentioned, I generally prefer scripted pipelines due to the flexibility of Groovy and the code-as-configuration aspect. However for Docker, I prefer using the nicer declarative syntax. So my `docker` stage looked like this:

```groovy
// Sketch only -- the original stage was longer; registry and credentials ids are placeholders
stage('docker') {
    steps {
        sh 'sudo docker login -u $DOCKER_USER -p $DOCKER_PASS registry.address'
        script {
            docker.withRegistry('https://registry.address', 'docker-registry') {
                docker.build('myapp').push('latest')
            }
        }
    }
}
```
The login step kept succeeding with “Login successful” while the second stage failed.
As I usually do with Jenkins, I first try to translate the syntax to scripted pipelines that are then reproducible in a shell. Here is the second stage rewritten:
```groovy
docker.withRegistry('https://registry.address', 'docker-registry') {
    docker.build('myapp').push('latest')   // the docker CLI here runs without sudo
}
```
This failed again. So I headed over to the shell to run these commands, which failed again. I finally realized that the login command ran with `sudo` whereas the push command did not. After fixing it by making both stages run with `sudo`, it worked! Although I could also have removed the `sudo` on the login step instead.
What I learned is that, once again, declarative pipelines are unreliable because you cannot know what code really lies behind the configuration like you would with scripted pipelines.
So how do we use CommonJS Node modules (or other non-Angular modules) with Angular 2+?
Make sure the package actually exists in the project. Let's assume it is under `node_modules`.
The first thing to do is to head over to the `angular.json` file (or `angular-cli.json` for Angular 5 and under) and add the script. For example with the `jsoneditor` and `bootstrap` packages under `node_modules`:
```json
{
  "projects": {
    "my-app": {
      "architect": {
        "build": {
          "options": {
            "scripts": [
              "node_modules/jsoneditor/dist/jsoneditor.min.js",
              "node_modules/bootstrap/dist/js/bootstrap.min.js"
            ]
          }
        }
      }
    }
  }
}
```
I just printed the relevant hierarchy here. So pay attention not to insert the script into the `test` configuration like I stupidly did. And keep in mind that the path is relative to the project's `sourceRoot`.
Try using the class in your Angular module. Because the script was added in `angular.json`, the code should work at runtime, but the compiler will complain that the class is unknown. And for good reason, since there is no way to import it yet.
Now create `typings.d.ts` if it does not exist, and add a declaration for your class, for example with `jsoneditor`'s class called `JSONEditor`:
```typescript
declare var JSONEditor: any;
```
Much like a C/C++ header, this is a way of telling the compiler “Don’t worry, I’ll provide an implementation of that class at runtime”.
Now you should be able to use the class in your module, service or other without problem.
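A usage sketch (the element id and options are illustrative; the script from angular.json provides `JSONEditor` as a global at runtime, while typings.d.ts only satisfies the compiler):

```typescript
// somewhere in a component, after the view is initialized
const container = document.getElementById('editor');
const editor = new JSONEditor(container, { mode: 'tree' });
editor.set({ hello: 'world' });
```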