Love cat -n. When working with CSV files I often use a command like this to figure out which column I need:

head -n1 file.csv | sed 's/,/\n/g' | cat -n
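A sketch of what that pipeline produces, on a made-up header row (tr ',' '\n' would work in place of the sed step):

```shell
# Sample header line standing in for file.csv's first row:
printf 'name,age,city\n' | sed 's/,/\n/g' | cat -n
# numbers each header field on its own line:
#      1  name
#      2  age
#      3  city
```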
Sweet
I feel like you can’t compare these tools without talking about cut. I personally never use awk, but cut and other coreutils can be used together to achieve much of the same.

For me it’s grep, but I’m amazed how many searching tools there are on Linux: sed, grep, ripgrep, cat, find, walk, sort, locate, awk, etc.
Thanks for the info!
Okay, so cat is all I need, right? …right?
I pretty much always use these commands unless I need regex or something. I think it’s a lot more maintainable by other people since awk and sed have their own unique syntax.
What is it you want to do?
Just trying to get a good mental model of when it’s reasonable to use tools like awk instead of simpler unix tools. Also further confirming that sed is almost never the best tool except for substitutions.
Sed and awk have a lot of overlap. Another thing to consider is that neither might be the right choice if you are inside a bash script since spawning a new process in a tight loop can be very expensive and bash’s built-in regex operations can have much better performance in those situations.
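A minimal sketch of that trade-off (assuming bash; the input is made up): the [[ =~ ]] match runs in-process, so nothing is forked per line.

```shell
#!/usr/bin/env bash
# Extract the value after "ID=" without spawning sed/grep for each line:
while read -r line; do
  if [[ $line =~ ^ID=(.*)$ ]]; then
    echo "${BASH_REMATCH[1]}"
  fi
done <<'EOF'
NAME=example
ID=opensuse-tumbleweed
EOF
# prints: opensuse-tumbleweed
```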
That’s a good point. If I need something like a bash script I tend to stick to bash features as much as possible.
Ok, I would say… cat is mostly for printing files and concatenating them. sed and awk are good at reshaping the output of files or piped standard input. They both require some time to learn, but it is worth going back to them.
cat:
while read -r l; do echo "$l"; done <

cat -e:
while read -r l; do echo "$l"$; done <

cat -n:
n=0; while read -r l; do n="$((n+1))"; printf '%5d %s\n' "$n" "$l"; done <

cat -b:
n=0; while read -r l; do [ -n "$l" ] && n="$((n+1))" && printf '%5d %s' "$n" "$l"; echo; done <
$ n=0; while read -r l; do n="$((n+1))" printf ' %d %s\n' "$n" "$l"; done < /etc/os-release
 0 NAME="openSUSE Tumbleweed"
 0 # VERSION="20230619"
 0 ID="opensuse-tumbleweed"
[...]
The ; was missing: without it, n="$((n+1))" only sets n in printf's environment, so the shell variable the loop prints never increments.
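With the semicolon restored, the counter increments as intended. A runnable sketch on sample input (printf stands in for /etc/os-release so it works anywhere):

```shell
printf 'NAME=x\nID=y\n' | { n=0; while read -r l; do n="$((n+1))"; printf '%5d %s\n' "$n" "$l"; done; }
#     1 NAME=x
#     2 ID=y
```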