Nice slides. I missed him covering ripgrep and friends, which are very useful for avoiding slower find-grep queries.
Along with ripgrep, I think GNU Parallel (covered in the slides) and htop (briefly discussed) are great additions to any Unix development environment.
The rest of the standard utilities have stood the test of time surprisingly well. But top is a bit wonky, xargs has many pitfalls, and, as I said, ripgrep is great for speeding up some find-grep workflows.
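To make the find-grep comparison concrete, here is a minimal sketch of the traditional pipeline, with the rough ripgrep one-liner shown in a comment. The pattern, file names, and extensions are all hypothetical, chosen just to illustrate the workflow:

```shell
# Set up a tiny throwaway tree (hypothetical files, for illustration only).
TMP=$(mktemp -d)
printf 'needle here\n' > "$TMP/a.txt"
printf 'nothing\n'     > "$TMP/b.log"

# Traditional find-grep: locate candidate files, then grep each one.
# -print0 / -0 keep filenames with spaces intact.
matches=$(find "$TMP" -name '*.txt' -print0 | xargs -0 grep -l 'needle')

# ripgrep does the same in a single pass (and skips .gitignore'd files
# by default), which is where the speedup comes from:
#   rg -l 'needle' -g '*.txt' "$TMP"
```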
Does fd actually cover all the same ground as find? I used to be intimidated by find, but these days I find it extremely useful for all sorts of otherwise complicated file-location operations. For example, to find files newer than a certain date, just use 'touch' to set the date of a temp file and then 'find -newer temp'. Or: I have a script that deduplicates all the regular files in a tree by hard-linking each one to a file named by the sha256 of its contents ('find -type f | ( while read -r fn; do ...; done )'), and then, after deleting files or whatever, I can "garbage collect" the shas by deleting the files in my link farm that only have one hard link ('find -links 1 -print -delete').
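A minimal sketch of that dedup-plus-GC idea, filling in the elided loop body with one plausible implementation. The LINKFARM/TREE names and the sample files are assumptions for illustration; it also assumes paths contain no newlines (the usual caveat with piping find into read):

```shell
set -eu
LINKFARM=$(mktemp -d)   # hypothetical "link farm" of sha256-named files
TREE=$(mktemp -d)       # hypothetical tree to deduplicate
printf 'hello\n' > "$TREE/a.txt"
printf 'hello\n' > "$TREE/b.txt"   # duplicate content
printf 'bye\n'   > "$TREE/c.txt"

# Dedup pass: hard-link every regular file to a farm entry named by
# the sha256 of its contents.
find "$TREE" -type f | while read -r fn; do
  sha=$(sha256sum "$fn" | cut -d' ' -f1)
  if [ -e "$LINKFARM/$sha" ]; then
    ln -f "$LINKFARM/$sha" "$fn"   # duplicate: replace with a hard link
  else
    ln "$fn" "$LINKFARM/$sha"      # first occurrence: register it
  fi
done

# Later, after deleting files from the tree...
rm -f "$TREE/c.txt"

# ...garbage-collect: a farm entry with only one link is unreferenced.
find "$LINKFARM" -links 1 -print -delete
```

After this runs, a.txt and b.txt share one inode, and the farm entry for c.txt's content is gone.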
Much the same here, but I tried fd the other week, and for simple searches it definitely feels much quicker and "lighter", so worth adding to the toolkit imo.