I am personally, and firmly, of the belief that platform providers should not attempt to police creative expression, including sexuality-based expression, that is legal and non-hateful. Proactively blocking access to content is not a sensible way to build services that are useful to the world at large. Providers should respect users' preferences if they ask not to see certain types of content. But providers should default to showing users information that they've specifically requested instead of behaving in a paternalistic manner.
Permitting freedom of expression about sexuality, and access to age-appropriate information on sexuality, is beneficial to users. Users are best served by information that fits their needs. This includes the freedom to express non-mainstream sexuality, to express it in non-mainstream ways, and to access information published by others about non-mainstream sexuality. Platforms should not attempt to censor legally permissible expression related to legal, consensual activity.
There are tremendous negative public health consequences associated with sex-negativity. For many people, publishing platforms and search tools are the primary gateway to information. Censoring information related to sex kills people. Blindly censoring terms such as "condom", "bisexual", or "transgender" makes information critical to health and wellness less available. Excluding sex-positivity causes people with non-mainstream sexual identities to feel excluded and isolated, leading to significant guilt, shame, and depression. Images and video are important to discussions of sex-positivity - for example, tying people up the wrong way can cause serious injury or death, and posting images and video of how to do it safely is neither gratuitous nor a luxury or privilege to be taken away. Users will engage in sexual activity regardless of providers' approval, so providers should serve them by informing them.
If a user is logged into a platform and has elected to see or not see explicit content, platforms should respect and follow that preference. But if there is no information about a user because they have chosen not to log in, platforms should default to showing content (behind an interstitial). Of course, platforms should provide age-appropriate content to someone whose age is known - asking anonymous individuals their age and directing them to a resource like Scarleteen is a great user experience - but no one should be required to give up their privacy to view explicit content. Requiring users to log in to view content relating to sexual health makes that content inaccessible to the most vulnerable users. Users feel chilled when they must associate their accounts, email addresses, and names with viewing content that others label morally objectionable. Requiring individuals to log in creates digital footprints that put users who need to keep their behavior private at risk of repercussions.
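To make the default I'm arguing for concrete, here is a minimal sketch of that preference hierarchy. The names (`Viewer`, `gate_explicit_content`, the age cutoff) are hypothetical illustrations of my argument, not any platform's actual policy or API:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Decision(Enum):
    SHOW = auto()                    # show the content directly
    SHOW_WITH_INTERSTITIAL = auto()  # show after a "this may be explicit" notice
    HIDE = auto()                    # respect an explicit opt-out
    REDIRECT_TO_EDUCATION = auto()   # point minors at an age-appropriate resource


@dataclass
class Viewer:
    # None means "unknown" - e.g. the viewer has chosen not to log in.
    wants_explicit_content: Optional[bool] = None
    age: Optional[int] = None


def gate_explicit_content(viewer: Viewer) -> Decision:
    """Decide how to treat explicit-but-legal content for a given viewer.

    Priority order, per the argument above:
    1. An explicit, user-chosen preference always wins.
    2. A known age lets the platform serve age-appropriate material.
    3. With no information at all, default to showing the content
       behind an interstitial rather than blocking it outright.
    """
    if viewer.wants_explicit_content is not None:
        return Decision.SHOW if viewer.wants_explicit_content else Decision.HIDE
    if viewer.age is not None and viewer.age < 18:
        return Decision.REDIRECT_TO_EDUCATION  # e.g. a resource like Scarleteen
    return Decision.SHOW_WITH_INTERSTITIAL
```

The point of the ordering is that a stated preference always beats an inference, and the complete absence of information defaults to access rather than a block.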
It's impossible to actually draw a bright line when enforcing content policies based on moral standards. Deciding the boundary between sex education and porn is extremely difficult to get right, and the attempt incurs numerous false positives that cause people to avoid platforms entirely - for example, clearly educational videos are frequently blocked on platforms and not restored on appeal. Even the most enlightened policy rules will occasionally be applied incorrectly (e.g. to breastfeeding, STI prevention and education, or legitimate sex education that is supposedly allowed under the platform's terms of service). Rule-abiding users who are remotely unsure of whether their content might get blocked (and whether they will be able to appeal successfully, and whether there will be collateral consequences for the rest of their account) will avoid platforms that are ambiguous about their commitment to free speech. Moreover, a discretionary standard is inherently more vulnerable to organized pressure campaigns from censorship-minded governments and interest groups than a principled commitment to freedom of expression.
Even permitting 'non-commercial speech' without limitation isn't enough. Denying businesses access to platforms on the basis of the type of non-abusive content they host is a form of redlining. We are, unfortunately, moving towards a world in which only "morally approved" types of business can be conducted using the best possible tools, and everyone doing anything remotely suspect must pay extra for inferior tools or make do without them entirely. For instance, +Kristen Stubbs had immense trouble finding a payment processor that would serve their nonprofit at any price because of the nature of their business: crowdfunding for sex toys. Imagine if they couldn't find hosting either, because no web host would allow them to host images of the products their customers would be buying. Platform providers contribute to that morality policing by closing doors to customers who can't abide by restrictive terms of service.
(all opinions strictly mine, and not necessarily those of Google. I do not speak for my employer. seriously, I do not speak for my employer.)