Strongly depends on the details of the setting. It's pretty much all about control.
I'd think about it if:
1. The technology unambiguously supports our consciousness, rather than being some kind of imitation (whatever breakthroughs in philosophy or metaphysics are required to establish that). I'm not convinced that continuity and squishy neurons are absolutely necessary for existence or consciousness (then again, as pointed out, we don't really know what consciousness is yet). When we're deeply sedated, our cognition stops, then starts up again. We can already turn ourselves on and off with chemistry, and we don't have an existential crisis every time we wake up from that. We just can't copy ourselves or multithread or anything yet.
2. The technology is entirely under the end user's control, from the transistors up. If I have to run on anyone else's hardware, or if there's even a hint of vendor lock-in or encroachment, hard no. I'd do this only if the parts are as widely available and interchangeable as modern PC components or standard stepper motors. The transhuman user has to be the sovereign owner and absolute root controller of the technology involved.
3. Some kind of embodiment and control over physical circumstances (robots, androids, etc.) are available. You're not just a helpless brain in a box, since that's a far more vulnerable position than you should put yourself in.
4. I'm getting old and my current body is wearing out. If the alternative is frailty and death anyway, then sure, let's try the robot thing.
5. Dying is always an option. (Again, control. The nightmare scenario is being trapped against your will in some hellish circumstance, which is something I'd think robot people would have to worry about.) Nature's cruelty pretty much ends at death; that's its furthest extent. It's only mankind that has attempted to create hell.