HappyPolygon

Premium Member
  • Posts: 1,912
  • Days Won: 97

Everything posted by HappyPolygon

  1. Did it! Choose the Weight tool, go to polygon mode and choose the Spine1 joint; the viewport will look like this. Now, in the Weight tool attributes, change the following settings. Then paint all over the bright orange area until it's completely black (the tool paints points, so aim at those and not the polygons). Done! That bone should not have any effect on the character anymore. It should be used as a helper to move it (I don't know why the artist made two joints instead of one for that).
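If you'd rather script the painting step instead of doing it by hand, here is a minimal Python sketch (for the Script Manager) that zeroes the weights of a named joint on the selected Weight tag. The joint name "Spine1" and the helper function name are assumptions taken from this thread; zeroed weights get renormalized across the remaining joints, which is effectively the same as painting the area black:

```python
import c4d

def zero_joint_weights(weight_tag, joint_name="Spine1"):
    """Sets the weight of every point to 0.0 for the joint with the given name."""
    tag_doc = weight_tag.GetDocument()
    mesh = weight_tag.GetObject()                 # the skinned point object
    point_count = mesh.GetPointCount()
    for j in range(weight_tag.GetJointCount()):
        joint = weight_tag.GetJoint(j, tag_doc)
        if joint is not None and joint.GetName() == joint_name:
            for p in range(point_count):
                weight_tag.SetWeight(j, p, 0.0)   # same as painting the point black
    weight_tag.WeightDirty()                      # mark the weighting as changed
    c4d.EventAdd()

# Run from the Script Manager with the Weight tag selected in the Object Manager.
tag = doc.GetActiveTag()
if isinstance(tag, c4d.modules.character.CAWeightTag):
    zero_joint_weights(tag, "Spine1")
```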
  2. This is so meta 😂... Zion, according to The Architect, has been completely destroyed five times by the time the sixth One, Neo, meets The Architect. As a result, the actual year on Earth is estimated to be closer to 2699, not 2199. Well... let's call it Alpha and it will count as the first true Matrix launch ...
  3. You might have misunderstood me. If you want to apply only one material you don't have to check all those selections; only uncheck Colorize Fragments. This is how your tombstone will look if you don't use any VF selections. But with the Inside Faces selection checked you can apply a new material to the inner part of the fragments. This is what those selections mean. Actually only 2 of them are polygon selections and can thus be assigned a material.
  4. The Voronoi Fracture object uses polygon selections to assign materials. Check all selections you want to apply different materials to. But for a single material assignment just uncheck the Colorize Fragments.
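In case you prefer to set that up with a script instead of doing it in the Object Manager, here is a minimal Python sketch of the same idea. It assumes the selection created by the Inside Faces checkbox keeps its default name "Inside Faces"; the function name and the UVW projection are just illustrative choices:

```python
import c4d

def assign_inside_material(vf_object, material, selection_name="Inside Faces"):
    """Adds a Texture tag to the Voronoi Fracture object and restricts it to
    the polygon selection generated by the Inside Faces checkbox."""
    tag = vf_object.MakeTag(c4d.Ttexture)
    tag[c4d.TEXTURETAG_MATERIAL] = material
    tag[c4d.TEXTURETAG_PROJECTION] = c4d.TEXTURETAG_PROJECTION_UVW
    # The restriction string has to match the selection tag's name exactly.
    tag[c4d.TEXTURETAG_RESTRICTION] = selection_name
    c4d.EventAdd()
```

The outer fragments keep whatever material sits in an unrestricted Texture tag, exactly as when you assign it by hand.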
  5. The black on the base is too black. Although blurred, the reflection is too strong. The shadows are too transparent for this kind of strong sunlight.
  6. Forget AI images like these that look like quick Corel Painter concept art or quick PS compositions with DoF covered backgrounds : Today the San Francisco-based lab announced DALL-E’s successor, DALL-E 2. It produces much better images, is easier to use, and—unlike the original version—will be released to the public (eventually). DALL-E 2 may even stretch current definitions of artificial intelligence, forcing us to examine that concept and decide what it really means. "Teddy bears mixing sparkling chemicals as mad scientists, steampunk" / "A macro 35mm film photography of a large family of mice wearing hats cozy by the fireplace" Image-generation models like DALL-E have come a long way in just a few years. In 2020, AI2 showed off a neural network that could generate images from prompts such as “Three people play video games on a couch.” The results were distorted and blurry, but just about recognizable. Last year, Chinese tech giant Baidu improved on the original DALL-E’s image quality with a model called ERNIE-ViLG. DALL-E 2 takes the approach even further. Its creations can be stunning: ask it to generate images of astronauts on horses, teddy-bear scientists, or sea otters in the style of Vermeer, and it does so with near photorealism. The examples that OpenAI has made available (see below), as well as those I saw in a demo the company gave me last week, will have been cherry-picked. Even so, the quality is often remarkable. "One way you can think about this neural network is transcendent beauty as a service,” says Ilya Sutskever, cofounder and chief scientist at OpenAI. “Every now and then it generates something that just makes me gasp." Diffusion models are trained on images that have been completely distorted with random pixels. They learn to convert these images back into their original form. In DALL-E 2, there are no existing images. So the diffusion model takes the random pixels and, guided by CLIP, converts it into a brand new image, created from scratch, that matches the text prompt. The diffusion model allows DALL-E 2 to produce higher-resolution images more quickly than DALL-E. “That makes it vastly more practical and enjoyable to use,” says Aditya Ramesh at OpenAI. In the demo, Ramesh and his colleagues showed me pictures of a hedgehog using a calculator, a corgi and a panda playing chess, and a cat dressed as Napoleon holding a piece of cheese. I remark at the weird cast of subjects. “It’s easy to burn through a whole work day thinking up prompts,” he says. "A sea otter in the style of Girl with a Pearl Earring by Johannes Vermeer" / "An ibis in the wild, painted in the style of John Audubon" DALL-E 2 still slips up. For example, it can struggle with a prompt that asks it to combine two or more objects with two or more attributes, such as “A red cube on top of a blue cube.” OpenAI thinks this is because CLIP does not always connect attributes to objects correctly. As well as riffing off text prompts, DALL-E 2 can spin out variations of existing images. Ramesh plugs in a photo he took of some street art outside his apartment. The AI immediately starts generating alternate versions of the scene with different art on the wall. Each of these new images can be used to kick off their own sequence of variations. “This feedback loop could be really useful for designers,” says Ramesh. One early user, an artist called Holly Herndon, says she is using DALL-E 2 to create wall-sized compositions. 
“I can stitch together giant artworks piece by piece, like a patchwork tapestry, or narrative journey,” she says. “It feels like working in a new medium.” User beware DALL-E 2 looks much more like a polished product than the previous version. That wasn’t the aim, says Ramesh. But OpenAI does plan to release DALL-E 2 to the public after an initial rollout to a small group of trusted users, much like it did with GPT-3. (You can sign up for access here.) GPT-3 can produce toxic text. But OpenAI says it has used the feedback it got from users of GPT-3 to train a safer version, called InstructGPT. The company hopes to follow a similar path with DALL-E 2, which will also be shaped by user feedback. OpenAI will encourage initial users to break the AI, tricking it into generating offensive or harmful images. As it works through these problems, OpenAI will begin to make DALL-E 2 available to a wider group of people. OpenAI is also releasing a user policy for DALL-E, which forbids asking the AI to generate offensive images—no violence or pornography—and no political images. To prevent deep fakes, users will not be allowed to ask DALL-E to generate images of real people. "A bowl of soup that looks like a monster, knitted out of wool" / "A shibu inu dog wearing a beret and black turtleneck" As well as the user policy, OpenAI has removed certain types of image from DALL-E 2’s training data, including those showing graphic violence. OpenAI also says it will pay human moderators to review every image generated on its platform. “Our main aim here is to just get a lot of feedback for the system before we start sharing it more broadly,” says Prafulla Dhariwal at OpenAI. “I hope eventually it will be available, so that developers can build apps on top of it.” Creative intelligence Multiskilled AIs that can view the world and work with concepts across multiple modalities—like language and vision—are a step towards more general-purpose intelligence. DALL-E 2 is one of the best examples yet. But while Etzioni is impressed with the images that DALL-E 2 produces, he is cautious about what this means for the overall progress of AI. “This kind of improvement isn’t bringing us any closer to AGI,” he says. “We already know that AI is remarkably capable at solving narrow tasks using deep learning. But it is still humans who formulate these tasks and give deep learning its marching orders.” For Mark Riedl, an AI researcher at Georgia Tech in Atlanta, creativity is a good way to measure intelligence. Unlike the Turing test, which requires a machine to fool a human through conversation, Riedl’s Lovelace 2.0 test judges a machine’s intelligence according to how well it responds to requests to create something, such as “A picture of a penguin in a spacesuit on Mars.” DALL-E scores well on this test. But intelligence is a sliding scale. As we build better and better machines, our tests for intelligence need to adapt. Many chatbots are now very good at mimicking human conversation, passing the Turing test in a narrow sense. They are still mindless, however. But ideas about what we mean by “create” and “understand” change too, says Riedl. “These terms are ill-defined and subject to debate.” A bee understands the significance of yellow because it acts on that information, for example. “If we define understanding as human understanding, then AI systems are very far off,” says Riedl. “But I would also argue that these art-generation systems have some basic understanding that overlaps with human understanding,” he says. 
“They can put a tutu on a radish in the same place that a human would put one.” Like the bee, DALL-E 2 acts on information, producing images that meet human expectations. AIs like DALL-E push us to think about these questions and what we mean by these terms. OpenAI is clear about where it stands. “Our aim is to create general intelligence,” says Dhariwal. “Building models like DALL-E 2 that connect vision and language is a crucial step in our larger goal of teaching machines to perceive the world the way humans do, and eventually developing AGI.”
  7. There are two models. They are both modeled on the XZ plane but the rig is on the YZ plane. The second model is fine. Yeah, maybe sharing the original model instead of the c4d file could help more. I've noticed that moving the ride01 bone a bit lower adds a bit more volume. Setting the Length to Uniform Scale in the Skin object also puts some more volume there. MIGHT might be right here 'cause the second model is on the same plane as the bones.
  8. As we don't have the file to tinker with, have a look at this solution and tell us if it helped.
  9. Just realized the post was under "Houdini". Well, now I know it's not an S26 feature. Fingers crossed for R27.
  10. You both have NDAs with MAXON ? How many are you ? I'm jealous, want one too !
  11. It's probably going to be a voxel cloud. The network knows what is where, it just loses the quality to represent it perfectly if there are not enough angles of it. As seen in the Two Minute Papers video, many far-away elements or overly complex objects like trees have a bad JPEG look on them. But the whole conversion process will look like the current method of object reconstruction. It won't be efficient to convert to polygons.
  12. No, not yet. But you can do the opposite. (use a shader to influence hair bend)
  13. You could use an XPresso setup to automatically move the second light in the opposite direction. You'd basically be rigging your own symmetry tag.
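If you'd rather skip the XPresso wiring, the same idea can be sketched with a Python tag on the second light. This is just a minimal sketch: it assumes a user-data Link field (ID 1) on the tag pointing at the first light, and it mirrors the position across the world YZ plane by flipping X:

```python
import c4d

def main():
    # Hypothetical user-data Link (ID 1) on this Python tag, pointing at the first light.
    source = op[c4d.ID_USERDATA, 1]
    if source is None:
        return
    target = op.GetObject()  # the second light, i.e. the object hosting this tag
    pos = source.GetAbsPos()
    # Flip the X coordinate to mirror the position across the world YZ plane.
    target.SetAbsPos(c4d.Vector(-pos.x, pos.y, pos.z))
```

Rotation could be mirrored the same way if the lights aren't omnidirectional.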
  14. I would be happy if nothing like this ever got to be part of C4D Scene Nodes. The main reason to use nodes should be to construct rules for adaptive generative modeling, not static modeling. He is doing everything the hard way; there is no serious reason to favor nodes over the already available native tools. If he had made his Cathedral adaptive I would be impressed. The only adaptive thing he ever did on his channel was the Christmas cable in the Mouse House. As for the content, as I said before, more content will emerge as people are willing to experiment and learn the new tool. When there is no content you make some. Someone has to make the first step. Maybe that someone is you.
  15. Have you seen any good Geometry Nodes Blender videos out there ? Because all I have seen is either short showcases and RnD without explaining how they did it, or basic stuff like how to distribute clones on surfaces. I haven't seen any tutorial on Geometry Nodes about anything C4D is not already capable of. In fact the main reason Blender developed Geometry Nodes was to compete with C4D's Fields and MoGraph. Most Blender users use Geometry Nodes for something we've been doing for more than 3 years. The most advanced Geometry Nodes videos out there are about procedural buildings in the form of kit-bashing, something that Srek has told me is already possible with Scene Nodes. There's a NAB or Motion Graphics video where Noseman explains how to build a procedural building using only cloners. People see Geometry Nodes and fail to see why nodes were really developed for other DCCs. Nodes offer a parametric, procedural, non-destructive way of building scenes or objects. C4D was already offering parametric, non-destructive and procedural tools; the problem was that not all tools were built as, or eventually became, parametric and non-destructive. For example, Spline Chamfer and Spline Outline are not non-destructive. In fact any orange-colored tool is destructive and/or non-parametric. Some years ago extruding was a destructive function; now we have it in the form of a deformer covering all the inconveniences. MAXON developed Scene Nodes not because we lacked the modern scene-building workflows but because improving the old tools would result in 30+ new deformers and generators doing the same thing as their orange twin counterparts. Having so many deformers and generators in a top-down hierarchy in the OM has its own drawbacks. The node system offers a way to arrange dependencies in 2D, allowing for more complexity and using fewer tools to affect more things at the same time in less room. Patience; on 20 April we'll have the new mid-term release and we'll know how much things have advanced. C4D is a commercial product and has always aimed at simple, fast and intuitive solutions. I prefer having an easy-to-use node system completed in 5 years rather than a hard one right now. And right now I'm having both.
  16. Doesn't matter, upload the file for others to see.
  17. You can also export the new volumetric object as VDB and re-import it as a stand-alone object. This can help your scene be less populated with generators that might also run computations in the background.
  18. I don't have a video. I have some threads. MODODO has made some serious stuff in here. And Chris Schmidt from Rocket Lasso has provided two Node Assets that are completely out of this world in terms of complexity and usefulness. The reason you don't see many (or any) videos about C4D nodes is that C4D users have been used to a certain type of workflow and thinking that is completely different from the node system. There was a time when everyone was afraid of XPresso (me included); now most people know how to use it. Same with Scene Nodes: as time passes and the system gets more mature, more people will start using it. Currently the manual doesn't really help with nodes, as their usability is not well documented, with examples and screenshots being outdated from previous versions. We expect many updates there soon. Blender has millions of users, as it is the main application used by students and hobbyists from various fields, not only CGI-related ones. It is expected that a certain number of people will emerge with some very elaborate showcases. C4D has a much smaller user base, so people with node showcases are far fewer.
  19. I thought those had an immediate effect on the object without any Mesher... damn I need to find a small VDB for testing purposes.
  20. Is the Mesher necessary ? Doesn't the VDB work directly under the VB ?