It's the opposite though, as they have, thus far, been extremely careful to use only their own licensed material for training, and have pledged to compensate contributors to their stock library (that is being used for training their AI models). As with all things Adobe, we'll have to see how that pledge actually plays out, but so far all indicators are that they want to play fair.
I suppose we will have to define what "significant human intervention" means. There already exist many apps that can produce compelling imagery at the click of a button (that don't use any AI). This is a much broader topic, and likely exceeds the scope of a single forum post, but the last 40 years of hardware and software progress have made it easier and easier to produce better and better imagery, with less human intervention.
Rendering is literally a process whereby you press a button to relinquish human control to a computer to produce the final output. Yes, there has to be initial prep before you press that button, but all that prep is basically translating a human thought into a state for the computer to output. My stance is that it's the same with AI — except that it just makes all that prep work less of a chore. Basically, a tool that removes more technical barriers between an idea and an output.
Replacing a tedious, technical process isn't necessarily a bad thing in the pursuit of content creation. There is definitely purity and value in pure mechanical and technical prowess, but in a capitalist world, easier methods will always take general precedence, especially if they can reduce costs. For me, a "significant human intervention" need only be the creative idea or direction that a human comes up with. Being able to translate that effortlessly onto a given medium shouldn't detract from it.
I'm in the beta, yes, and have played with it a bit. It's promising, but extremely limited in what it can currently do. Each generation is random — any further tweaking requires traditional methods. It also struggles with people, especially non-Caucasians. And it's heavily censored: I couldn't make an "exploding text" effect because the word "explode" is banned, so good luck trying to make a "cool guys don't look at explosions" poster.
I think Adobe's approach of adapting Firefly tech into its core apps (like Generative Fill in Photoshop, and vector recolouring in Illustrator) is the right one. By making it a tool that does one specific thing and is complemented by the existing toolset, it works a lot better for actual work — rather than being a self-contained, quasi-random image generator that's only useful for generating concepts and falls apart the moment you get any specific notes or feedback.
I don't believe there are any generation limits. It's online only, but I imagine that the final product should be able to work offline, at least optionally.
If it works offline (as it should), I wouldn't want to pay anything above the existing CC sub. If it's online-only, I can imagine it's fair for them to charge an extra amount, though I have no idea how that might be structured. Having said that, I would be miffed if it were online-only. There's zero reason for them to do that except to be greedy (and this is Adobe, so I do actually expect that).