The Linux Thread - The Autist's OS of Choice

dd is no fun any more. Most of my desktops are NVMe so as long as I use /dev/sd* then it's probably a USB device. I don't do dd much on the systems with all the spinny disks.
It was much more fun on things like Solaris where it was /dev/disk/c#t#d#s# instead of something simple like /dev/sda
Good times.
 
All my drives except / are /disk/0 /disk/1 /disk/2 etc (based on their physical location in the raid array), being mergerfs to /data

I mount the drives by UUID because the /dev/sd* names can change if I add a new drive and reboot, and that's too sketchy to rely on.
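For reference, the UUID approach plus the mergerfs pool boils down to fstab entries like these. The UUIDs, mount points, and options here are made up for illustration; get the real UUIDs from blkid.
Code:
```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /disk/0  ext4  defaults,noatime  0  2
UUID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy  /disk/1  ext4  defaults,noatime  0  2
# pool the member disks into /data with mergerfs
/disk/*  /data  fuse.mergerfs  defaults,allow_other  0  0
```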
 
Some say that because devices are just files you probably don't need dd:
conv=fsync and status=progress would like a word
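For anyone following along, here's what those flags look like in practice. The file names are stand-ins; on real hardware of= would be your USB device, so triple-check it. conv=fsync forces the data to actually hit the device before dd exits (so you don't yank the stick while the kernel is still flushing), and status=progress gives you a live byte counter instead of dead silence.

```shell
# make a 4 MiB dummy image standing in for an ISO
dd if=/dev/zero of=test.img bs=1M count=4 2>/dev/null

# fsync before exit, show progress while copying
dd if=test.img of=copy.img bs=4M conv=fsync status=progress

# verify the copy is byte-identical
cmp test.img copy.img && echo "copies match"
```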

Also, 128K for cat's buffer still sounds a little low, but I'm far too lazy to benchmark it.

I just use mv or cp or rsync because filesystems have evolved to the point that you don't need the low level cloning that dd does these days.
Yea, you're not the target market.
I most often use it to prepare USB sticks or SD Cards from iso or image files for booting specialized stuff like Raspberry Pis or recovery images. Also it's a good tool for data recovery to skip bad blocks on a copy.
Even on cloning it can be faster on drives with lots of tiny files that make rsync sad.
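The bad-block trick is conv=noerror,sync: noerror keeps dd going past read errors instead of aborting, and sync pads the failed reads with zeros so the good data stays at the right offsets in the image. The sketch below uses plain files just to show the flags; in a real rescue if= would be the dying device.

```shell
# stand-in source; a real rescue would read from e.g. /dev/sdb
dd if=/dev/zero of=flaky.img bs=64K count=16 2>/dev/null

# noerror: don't abort on read errors; sync: zero-pad short/failed reads
# so offsets in the output image stay aligned with the source
dd if=flaky.img of=rescued.img bs=64K conv=noerror,sync status=progress
```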
 
Speaking of fun with NVMe, don't try this at home, but it worked fine.
My laptop was getting short on space and had a 1T NVMe, but two slots. So I installed a 2T in the main slot and moved the 1T over.

I was going to use Clonezilla but for some reason it was giving me grief. I resorted to a Debian Live USB.
Boot system. Copy partition table as-is from old drive to new drive with sgdisk, which is already scary.
Then, instead of mounting each partition, making a new fs, and copying the files over (which wouldn't have worked for the BitLockered Windows partition anyway), I simply did:
Bash:
for i in /dev/nvme1n1p? ; do dd if=$i of=`echo $i | sed 's/nvme1n1/nvme0n1/'` bs=8192k status=progress ; done
Which at NVMe speeds only took about 15 minutes.
Now there's one you don't want to screw up. But it worked fine and then I used gparted to resize the partitions. And then booted Windows to expand its drive.
 
Would it kill someone to add a prompt to dd like "Write $FILE to /dev/sdb?" I'm not excusing my retardation that day, but that would have stopped me. I was lucky to have made a backup of my external drive a few months earlier, but it was still a lesson I'll never forget.
The closest you can get with minimal effort. Shove this in your ~/.profile.
Bash:
alias dd="read -p \"Press ENTER once you have checked your entered command\" && dd "
 
You can also use a function. Here's one I ripped from Stack Overflow, but ChatGPT is genuinely great at this kind of thing, so you can ask it to customise it however you like.
Code:
dd() {
  # Limit variables' scope; initialise args so "${args[@]}" is safe
  # even when no arguments besides of= are given
  local -a args=() command=()
  local output reply

  # Basic arguments handling
  while (( ${#} > 0 )); do
    case "${1}" in
    ( of=* )
      output="${1#*=}"
      ;;
    ( * )
      args+=( "${1}" )
      ;;
    esac
    shift || break
  done

  # Build the actual command (re-append of= only if one was actually given,
  # otherwise dd would choke on an empty "of=")
  command=( command -- dd "${args[@]}" ${output:+"of=${output}"} )

  # Warn the user
  printf 'Please double-check this to avoid potentially dangerous behavior.\n' >&2
  printf 'Output file: %s\n' "${output:-(stdout)}" >&2

  # Ask for confirmation
  IFS= read -p 'Do you want to continue? (y/n): ' -r reply

  # Check user's reply
  case "${reply}" in
  ( y | yes )
    printf 'Running command...\n' >&2
    ;;
  ( * )
    printf 'Aborting\n' >&2
    return
    ;;
  esac

  # Run command
  "${command[@]}"
}
 
Also it's a good tool for data recovery to skip bad blocks on a copy.
ddrescue comes in handy there (ubuntu packages it as gddrescue for some stupid reason), as it automagically skips bad blocks and then attempts to retrieve them after all the good data is rescued. It also allows for resumes, which is incredibly useful. I've used it twice, but both times it was the difference between an hour or two of downtime, or a full day reinstalling everything from memory and watching out for the inevitable collapse because I missed some minor configuration quirk. I suppose it might be less useful now that the age of spinning rust is drawing to a close.
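For the record, the usual two-pass ddrescue invocation looks something like this. The device and file names are hypothetical; the map file is what makes it resumable, so keep it around between runs.

```shell
# first pass: grab everything that reads cleanly, skip the slow/bad areas
ddrescue -n /dev/sdb rescue.img rescue.map

# second pass: go back and retry the bad areas up to 3 times
ddrescue -r3 /dev/sdb rescue.img rescue.map
```

Don't run it against a mounted filesystem, and rescue to an image on a different disk than the one that's dying.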
 
Even on cloning it can be faster on drives with lots of tiny files that makes rsync sad
ngl i had a similar problem with a massive 1tb file.
i noticed that rsync was taking up a lot of cpu and taking longer than it was supposed to (the actual job is about 3000 files totaling about 2.5gb).
thinking rsync was misbehaving i used nice to try and limit its cpu usage, but no dice.
so i ran the command in a terminal with --progress to see what it was getting hung up on.
lo and behold, it was the 1tb file that rsync was told to exclude.
thinking i'd typed the command wrong i checked it and didn't see anything weird.
as it turns out the file name in the rsync command was spelt correctly, but the file name on disk wasn't, so the exclude never matched.
now it behaves as it's supposed to. only downside now is that the ntfs driver for the backup drive is misbehaving and taking up a lot of cpu.
i know people will suggest that i swap it over to ext4 or something, but i need the backup drive to be readable on any device in the event my house burns down or something.
and since it also stores the backup of my media library, with files in excess of 4gb, fat32 isn't an option, so ntfs was my only choice.
if any of you boys have suggestions for fstab mount options that could help then by all means suggest away; i have also tried big_writes to no avail.
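One thing worth trying, assuming a reasonably recent kernel (5.15+): mount it with the in-kernel ntfs3 driver instead of ntfs-3g. ntfs-3g runs in userspace through FUSE, which is usually what's burning the CPU, and big_writes is a FUSE-specific option that doesn't apply to ntfs3 at all. Something like this in fstab (the UUID and mount point are placeholders):
Code:
```
# ntfs3 = in-kernel driver; uid/gid make the files owned by your user
UUID=XXXXXXXXXXXXXXXX  /mnt/backup  ntfs3  defaults,noatime,uid=1000,gid=1000  0  0
```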
 
Is there a tool that can continually check whether there's an internet connection, maybe run periodic speed tests, and store the results in a log?
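Nothing wrong with rolling your own: a cron job that pings something and appends a line to a log covers the "is it up" half, and speedtest-cli (a real tool, not shown here) can cover the periodic speed tests. A minimal sketch; the log path, check command, and target host are all just assumptions you'd tune:

```shell
#!/bin/sh
# append one up/down entry per run; run it from cron or a sleep loop
LOG="${LOG:-net.log}"
# CHECK is overridable so you can swap in curl, a different host, etc.
CHECK="${CHECK:-ping -c1 -W2 1.1.1.1}"

if $CHECK >/dev/null 2>&1; then
    echo "$(date -Is) up" >> "$LOG"
else
    echo "$(date -Is) down" >> "$LOG"
fi
```

Drop `* * * * * /path/to/netlog.sh` into your crontab for a check every minute; a second, less frequent cron entry appending speedtest-cli output to the same log handles the speed-test side.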
 