Everything posted by HappyPolygon
-
How can I write this using a Switch-Case node and without variable assignments? (I used the first assignment just to avoid writing cube1.globalrotation everywhere)

rotationB = cube1.globalrotationB()
if (rotationB >= 0 and rotationB < 90) or (rotationB >= 180 and rotationB < 270):
    cube2.globalrotationB() = rotationB
elif (rotationB >= 90 and rotationB < 180) or (rotationB >= 270 and rotationB < 360):
    cube2.globalrotationB() = -rotationB
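In plain Python (outside XPresso) the same quadrant logic can be collapsed into a single sign computation, with no chained conditions at all. This is only a sketch; the function name and the assumption that the angle is already in degrees are arbitrary:

def mirrored_rotation(rotation_b):
    # +rotation in quadrants 1 and 3, -rotation in quadrants 2 and 4
    angle = rotation_b % 360.0              # normalise to [0, 360)
    quadrant = int(angle // 90.0)           # 0, 1, 2 or 3
    sign = 1.0 if quadrant in (0, 2) else -1.0
    return sign * rotation_b

# 45 -> 45.0, 120 -> -120.0, 200 -> 200.0, 300 -> -300.0
print(mirrored_rotation(45), mirrored_rotation(120), mirrored_rotation(200), mirrored_rotation(300))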
-
@srek This node didn't really solve the problem. The Node outputs only discrete values. In my setup I want to check when my value is <90 or >90, flag that comparison (this case is binary so a bool fits, but there could be more options), and tell the Condition Node to execute different steps depending on the input. This means that the Condition has to switch to one of many possible outputs, OR I need to make a special IF so the Compare Nodes output the values 1 and -1, which I can then use at the end. Wasn't XPresso designed to carry out such operations? Maybe I do need to use a Python Node, but what I want to do seems so fundamentally simple it should take only one node... If there is no IF equivalent, it's impossible to have conditions based on inequalities (>, <, <=, >=), since the Switch-Case statement is evaluated only for equalities (==). Do we know if XPresso is Turing-complete? (without using the Python Node)
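The "special IF" I mean (turning a Compare result into +1/-1 and multiplying at the end) would look roughly like this in plain Python; the threshold and the names are only illustrative, and the bool-to-±1 mapping itself needs no branching:

def signed_pass_through(value, threshold=90.0):
    below = value < threshold            # what a Compare node outputs: a bool
    sign = 2 * int(below) - 1            # True -> +1, False -> -1, no IF needed
    return sign * value

print(signed_pass_through(45.0))         # 45.0
print(signed_pass_through(135.0))        # -135.0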
-
It works fine for me. In your screenshot the Snap mode is not enabled. Did you forget to enable it ?
-
So actually the Condition Node is the equivalent of the Switch-Case in C/C++?

switch (expression) {
    case constant1:
        // statements
        break;
    case constant2:
        // statements
        break;
    ...
    default:
        // default statements
}

Thanks Srek. I always found that node counterintuitive because I was trying to equate it with the IF condition, and the fact that I could have more than one Output port hurt my brain.
-
Actually I wanted to help Zeezy. I thought about using XPresso to make that type of animation, and I might learn one or two things in the process... But I stumbled across a problem I constantly avoid because I don't know how to solve it, and that's conditional branching. For this particular problem I wanted to partition a circle into 4 quadrants: 0 to 90, 90 to 180, 180 to 270, 270 to 360. When the motor is in the 1st or 3rd quadrant the hammer should rotate to the right; when the motor is in the 2nd or 4th quadrant the hammer should rotate to the left. Basically it's the abs(sin(x)) function, but interpreted in degrees based on the rotation of the motor. I couldn't do it with the function node (to translate it to Global Rotation B, that is) or the Python node, so I went full relational programming. My setup is completely off at the moment, I know.
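As a sanity check of the abs(sin(x)) idea, here is a rough Python sketch of the motor-to-hammer mapping, swinging right in the 1st and 3rd quadrants and left in the 2nd and 4th. The amplitude value and the assumption that both angles are in degrees are arbitrary:

import math

def hammer_angle(motor_deg, amplitude_deg=30.0):
    quadrant = int((motor_deg % 360.0) // 90.0)        # 0..3
    direction = 1.0 if quadrant in (0, 2) else -1.0    # right in 1st/3rd, left in 2nd/4th
    swing = abs(math.sin(math.radians(motor_deg)))     # 0..1 envelope
    return direction * amplitude_deg * swing

for m in (45, 135, 225, 315):
    print(m, round(hammer_angle(m), 2))                # 21.21, -21.21, 21.21, -21.21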
-
I don't understand how to use the Switch node. It has no input port and outputs only a Bool.
-
No, I want to make an XPresso rig.
-
What nodes can I use to emulate the following?

if x == 0:
    # do something
elif x == 1:
    # do something else

The Condition node doesn't seem to help, or I don't get how it should be rigged.
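In plain Python terms (names illustrative, not C4D API code), the behaviour I'm after is essentially an index-based selection rather than a chain of comparisons:

def pick_output(switch, outputs):
    return outputs[switch]      # 0 selects the first entry, 1 the second, ...

print(pick_output(1, ["do something", "do something else"]))   # "do something else"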
-
That's a new level of XPressoing right there...
-
No, leave it as it is; just put it under the Voronoi Fracture and make sure Colorize Fragments is off.
-
Hello Henry, please fill in your profile with the relevant information. We can help you better if we know which version of C4D you're using.
-
Mesh appears glitched with Skin ON. FBX imported asset c4d.
HappyPolygon replied to a topic in Cinema 4D
Did it!
Choose the Weight tool.
Go to polygon mode.
Choose the Spine1 joint. The viewport will look like this.
Now in the Weight tool attributes change the following.
Now paint all over the bright orange area until it's completely black (the tool paints points, so aim for those and not the polygons).
Done! That bone should not have any effect on the character. It should be used as a helper to move it (I don't know why the artist made two joints instead of one for that).
-
This is so meta 😂... Zion, according to The Architect, has been completely destroyed five times by the time the sixth One, Neo, meets The Architect. As a result, the actual year on Earth is estimated to be closer to 2699, not 2199. Well... let's call it Alpha and count it as the first true Matrix launch...
-
You might have misunderstood me. If you want to apply only one material you don't have to check all those selections; only uncheck the Colorize Fragments. This is how your tombstone will look if you don't use any VF selections. But with the Inside Faces selection checked you can apply a new material to the inner part of the fragments. This is what those Selections mean. Actually only 2 of them are polygon selections and can thus be assigned a material:
-
The Voronoi Fracture object uses polygon selections to assign materials. Check all selections you want to apply different materials to. But for a single material assignment just uncheck the Colorize Fragments.
-
The blacks on the base are too black. Although blurred, the reflection is too strong. The shadows are too transparent for this kind of strong sunlight.
-
Forget AI images like these that look like quick Corel Painter concept art or quick PS compositions with DoF covered backgrounds:

Today the San Francisco-based lab announced DALL-E’s successor, DALL-E 2. It produces much better images, is easier to use, and—unlike the original version—will be released to the public (eventually). DALL-E 2 may even stretch current definitions of artificial intelligence, forcing us to examine that concept and decide what it really means.

"Teddy bears mixing sparkling chemicals as mad scientists, steampunk" / "A macro 35mm film photography of a large family of mice wearing hats cozy by the fireplace"

Image-generation models like DALL-E have come a long way in just a few years. In 2020, AI2 showed off a neural network that could generate images from prompts such as “Three people play video games on a couch.” The results were distorted and blurry, but just about recognizable. Last year, Chinese tech giant Baidu improved on the original DALL-E’s image quality with a model called ERNIE-ViLG.

DALL-E 2 takes the approach even further. Its creations can be stunning: ask it to generate images of astronauts on horses, teddy-bear scientists, or sea otters in the style of Vermeer, and it does so with near photorealism. The examples that OpenAI has made available (see below), as well as those I saw in a demo the company gave me last week, will have been cherry-picked. Even so, the quality is often remarkable.

"One way you can think about this neural network is transcendent beauty as a service,” says Ilya Sutskever, cofounder and chief scientist at OpenAI. “Every now and then it generates something that just makes me gasp."

Diffusion models are trained on images that have been completely distorted with random pixels. They learn to convert these images back into their original form. In DALL-E 2, there are no existing images. So the diffusion model takes the random pixels and, guided by CLIP, converts it into a brand new image, created from scratch, that matches the text prompt.

The diffusion model allows DALL-E 2 to produce higher-resolution images more quickly than DALL-E. “That makes it vastly more practical and enjoyable to use,” says Aditya Ramesh at OpenAI. In the demo, Ramesh and his colleagues showed me pictures of a hedgehog using a calculator, a corgi and a panda playing chess, and a cat dressed as Napoleon holding a piece of cheese. I remark at the weird cast of subjects. “It’s easy to burn through a whole work day thinking up prompts,” he says.

"A sea otter in the style of Girl with a Pearl Earring by Johannes Vermeer" / "An ibis in the wild, painted in the style of John Audubon"

DALL-E 2 still slips up. For example, it can struggle with a prompt that asks it to combine two or more objects with two or more attributes, such as “A red cube on top of a blue cube.” OpenAI thinks this is because CLIP does not always connect attributes to objects correctly.

As well as riffing off text prompts, DALL-E 2 can spin out variations of existing images. Ramesh plugs in a photo he took of some street art outside his apartment. The AI immediately starts generating alternate versions of the scene with different art on the wall. Each of these new images can be used to kick off their own sequence of variations. “This feedback loop could be really useful for designers,” says Ramesh.

One early user, an artist called Holly Herndon, says she is using DALL-E 2 to create wall-sized compositions.
“I can stitch together giant artworks piece by piece, like a patchwork tapestry, or narrative journey,” she says. “It feels like working in a new medium.”

User beware

DALL-E 2 looks much more like a polished product than the previous version. That wasn’t the aim, says Ramesh. But OpenAI does plan to release DALL-E 2 to the public after an initial rollout to a small group of trusted users, much like it did with GPT-3. (You can sign up for access here.)

GPT-3 can produce toxic text. But OpenAI says it has used the feedback it got from users of GPT-3 to train a safer version, called InstructGPT. The company hopes to follow a similar path with DALL-E 2, which will also be shaped by user feedback. OpenAI will encourage initial users to break the AI, tricking it into generating offensive or harmful images. As it works through these problems, OpenAI will begin to make DALL-E 2 available to a wider group of people.

OpenAI is also releasing a user policy for DALL-E, which forbids asking the AI to generate offensive images—no violence or pornography—and no political images. To prevent deep fakes, users will not be allowed to ask DALL-E to generate images of real people.

"A bowl of soup that looks like a monster, knitted out of wool" / "A shibu inu dog wearing a beret and black turtleneck"

As well as the user policy, OpenAI has removed certain types of image from DALL-E 2’s training data, including those showing graphic violence. OpenAI also says it will pay human moderators to review every image generated on its platform. “Our main aim here is to just get a lot of feedback for the system before we start sharing it more broadly,” says Prafulla Dhariwal at OpenAI. “I hope eventually it will be available, so that developers can build apps on top of it.”

Creative intelligence

Multiskilled AIs that can view the world and work with concepts across multiple modalities—like language and vision—are a step towards more general-purpose intelligence. DALL-E 2 is one of the best examples yet.

But while Etzioni is impressed with the images that DALL-E 2 produces, he is cautious about what this means for the overall progress of AI. “This kind of improvement isn’t bringing us any closer to AGI,” he says. “We already know that AI is remarkably capable at solving narrow tasks using deep learning. But it is still humans who formulate these tasks and give deep learning its marching orders.”

For Mark Riedl, an AI researcher at Georgia Tech in Atlanta, creativity is a good way to measure intelligence. Unlike the Turing test, which requires a machine to fool a human through conversation, Riedl’s Lovelace 2.0 test judges a machine’s intelligence according to how well it responds to requests to create something, such as “A picture of a penguin in a spacesuit on Mars.” DALL-E scores well on this test.

But intelligence is a sliding scale. As we build better and better machines, our tests for intelligence need to adapt. Many chatbots are now very good at mimicking human conversation, passing the Turing test in a narrow sense. They are still mindless, however. But ideas about what we mean by “create” and “understand” change too, says Riedl. “These terms are ill-defined and subject to debate.” A bee understands the significance of yellow because it acts on that information, for example.

“If we define understanding as human understanding, then AI systems are very far off,” says Riedl. “But I would also argue that these art-generation systems have some basic understanding that overlaps with human understanding,” he says.
“They can put a tutu on a radish in the same place that a human would put one.” Like the bee, DALL-E 2 acts on information, producing images that meet human expectations. AIs like DALL-E push us to think about these questions and what we mean by these terms. OpenAI is clear about where it stands. “Our aim is to create general intelligence,” says Dhariwal. “Building models like DALL-E 2 that connect vision and language is a crucial step in our larger goal of teaching machines to perceive the world the way humans do, and eventually developing AGI.”
-
Mesh appears glitched with Skin ON. FBX imported asset c4d.
HappyPolygon replied to a topic in Cinema 4D
There are two models. They are both modeled on the XZ axis but the rig is on the YZ axis. The second model is fine. Yeah, maybe sharing the original model instead of the c4d file could help more. I've noticed that moving the ride01 bone a bit lower adds a bit more volume. Setting the Length to Uniform Scale in the Skin object also puts some more volume there. MIGHT might be right here 'cause the second model is on the same plane as the bones.
-
Mesh appears glitched with Skin ON. FBX imported asset c4d.
HappyPolygon replied to a topic in Cinema 4D
As we don't have the file to tinker with, have a look at this solution and tell us if it helped.
-
Prove it !
-
Houdini Get the Center "Outline" of a Spline/Text?
HappyPolygon replied to bentraje's topic in Houdini
Just realized the post was under "Houdini". Well, now I know it's not an S26 feature. Fingers crossed for R27.
-
You both have NDAs with MAXON? How many of you are there? I'm jealous, I want one too!
-
Houdini Get the Center "Outline" of a Spline/Text?
HappyPolygon replied to bentraje's topic in Houdini
Where is that node ?