Havealot

Maxon
  • Posts: 346
  • Joined
  • Last visited
  • Days Won: 6
Everything posted by Havealot

  1. Noseman did an introductory tutorial series on Cineversity: https://cineversity.maxon.net/en/series/particles-overview
  2. @pilF You are opening 2 brackets but only closing 1.
  3. @TristanV Make sure all parent objects (nulls, SDS objects, ...) are centered on the rotational axis of your bottle. Otherwise a rotation will look like a positional change (the first sketch after this list shows why that happens).
  4. There appears to be a cache missing in the scene that is supposed to be linked in the Volume Group object. Can you provide that?
  5. @tastebland You can drive the color of your clones with an effector or with fields and then transfer that to a vertex color tag on the metaball generator (at least in more recent versions of Cinema 4D). I would try to do it with a Volume Builder & Volume Mesher instead of the metaball, though. It is much easier to get smooth surfaces that way.
  6. @Pistol Pete Try turning off Linear Workflow Shading in the options menu of the viewport. If that solves it, it's a known issue that will be fixed in the next release.
  7. @kweso At the moment what you are trying to do is not possible in a straightforward way. For the fade color you can use the Math Modifier set to multiply and use a value close to 1 (like 0.99/0.99/0.99). That will make your colors fade to black over time, but it is a bit hard to control how fast, and the fade is not linear (the second sketch after this list shows the math behind that).
  8. @Jonas Dahl
     Q: Can I store my cached simulations and switch between different caches?
     A: In 2024.4 you can now cache all simulations as external files (Alembic). You don't get the nice list UI to conveniently switch between caches, but there is a parameter called "path to Alembic File" which you can point to different versions.
     Q: Can I keyframe / move positions of several caches in the Timeline after caching them?
     A: You can offset the whole animation using the Offset parameter in the cache tab. If you need it in the Timeline, you can turn off "Use Animation" and create keys on the frame parameter. That allows you to both offset and retime your cache. For simple time remapping there is also a spline UI.
     Q: How can I cache only a few frames instead of the whole timeline?
     A: There is an "enable" parameter in the basic tab of each simulation tag. You can use that to control on which frame the simulation starts and what is being cached. If you want the simulation to start at an earlier point in time but only cache a subset, I think the only way is to use the Alembic exporter directly to set a specific start and end frame.
     Q: If I disable an object with a simulation tag in the viewport and render, the simulation is still being calculated, right?
     A: I believe that's a bug.
  9. @bentraje They are separate systems. The new simulations are all designed to run on your GPU. What you can't do is influence the simulation with Scene Nodes while it is running. However, you can use Scene Nodes to create your own custom emitter, for example. If you generate points in a Node Mesh and use that in a Mesh Emitter, you can create your own initial positions and properties based on a node graph. This works especially well for one-shot emission. The other thing you can do (at least in theory; I think there is a refresh issue at the moment) is to read particle simulation data in Scene Nodes and do something interesting with it. You could, for example, build your own custom tracer.
  10. @Leo D Looks like a priority bug where the Cloner is overriding the rigid body positions with the particle positions (which it shouldn't in this setup). You can use instance mode to work around it, as Keppn suggested, but of course that is far less efficient. Or you rely only on particles in your simulation and simply clone on the result without using rigid bodies. The new particle system doesn't have particle-particle collision yet, but I assume the example in the help was done using the flock modifier to fake collisions (only keep the separation and turn off cohesion and alignment).
  11. You can add a Vertex Color tag to the Connect and drag in the Cloner as a MoGraph Field Layer. It's not the most efficient way, and it only works if the matrix of each clone also happens to be the closest clone matrix for each mesh point. But if your clones are not overlapping and a bit separated, this can work without baking.
  12. It is true that you can't drive the creation of particles with a map, shader or noise yet. But there are two workarounds. You can either drive a polygon selection with a map/shader/noise and use that to restrict the emission, or you can emit evenly and instantly kill particles based on a map/shader/noise. Here is an example: the Shader Field drives a vertex color tag which is then used to set the particle colors on emission. Particles are emitted into a first group where their color is checked. If it is below a certain value, the particle is killed right away; if not, it is moved to a permanent group where it will happily continue its particle life. In case you are wondering where the animation is coming from, that's set in the Shader Field with the remapping graph to cycle through different grey values of the texture. This is not the most efficient way, as the buffers need to be cleaned up more frequently than needed, but you can achieve the intended effect (the last sketch after this list illustrates the kill-by-threshold logic). kill_by_map.c4d
  13. Try implementing it. And make sure it runs on CPU, GPU (not just on NVIDIA), ARM processors, all operating systems, correctly deals with ACES, ...
  14. That can be done with a shader field in the vertex color tag. No need for Scene Nodes. But: You can do some cool scene node setups when it comes to scattering initial spawn locations for the particles and crafting your own velocities and such.
  15. That's fair. Autodesk just released Maya and Max 2025 in March 2024 😄
  16. @Johnny Maunason For this particular use case the Fracture object isn't the right solution. Just group your objects along with the effector (with deformation set to object) under a null and it should scale relative to the individual axes.
  17. @Jaee Is your setup correct? I assume what you are trying to do is to use the simulated (low-res) version of a mesh to deform a high-res net. But your deformer is a child of the wrong object.
  18. Looks like there is an issue with the triangle mesh option in the Collider tag. Use convex hull or box instead. And if you can, try to keep the "thickness" higher; it is used in the collision calculation and larger values make it more robust. Between rigid body objects this won't lead to a gap; it only adds to the gaps between rigid body objects and collider objects. Also, avoid initial setups where collision shapes intersect. If you have a stack of objects all sitting flush on top of each other and the bottom one intersects with the floor collider, the first execution will have a collision resolution that propagates through the whole stack. You can start with a relaxed state with a bit of a gap and let the objects fall into the stack via the simulation. Once they have come to a stable rest, you can set that as the new initial state for the simulation.
  19. @LLS It is just a loop in which you can update variables in each iteration. In Python it would look something like this:

      import c4d

      # Initialize your vectors and array. This is the equivalent of creating the
      # initial ports on the outside of the Loop Carried Value and feeding them
      # with initial values.
      start = c4d.Vector(1, 2, 3)
      end = c4d.Vector(4, 5, 6)
      hit_points = []

      # Specify the range for your loop. In Scene Nodes the loop range is defined
      # by the Range node outside of the Loop Carried Value. In this setup it
      # controls how many bounces are being created.
      for i in range(10):
          # This is the inside of the loop where you do your thing and then
          # update your variables based on some logic. This happens in each
          # iteration.
          start = end
          end = start + c4d.Vector(i, i, i)
          # If you need all of the in-between results later in the node graph,
          # you can store them in an array by appending them in every iteration.
          hit_points.append(end)

      # Now the range / loop is done and we are leaving its scope. If you access
      # the variables now, you get the state of the last iteration for start and
      # end, and an array with all of the iteration states in hit_points.

      Hope that helps.
  20. @HappyPolygon That is GLSL. I have managed to port a shader from GLSL to OSL in the past, but I am not sure if this one will work.
  21. @LLS One benefit you'd get in Houdini for this type of scenario would be direct support of volumes. Scene Nodes doesn't have volume nodes, so you'll have to work with a combination of classic C4D and Scene Nodes. Regarding multi instances: of course Houdini has instancing. Regarding the Volume Builder: I've never done any direct comparisons, but I would assume it is pretty much the same. In many cases Houdini isn't much faster than other tools. But since there is automatic caching and a nice workflow for working with disk caches, you often get better perceived performance overall.
  22. @hyyde This was just meant as a workaround. It's a known issue and will be fixed in the next release. Please still check whether using the default layouts gets rid of the crashes on your end. And if you haven't done so already, please also send reports for crashes that occur without plugins.
  23. @hyyde There is an issue related to the use of custom layouts. Are you using any of those? If it's an option in your case, maybe you can use the standard layouts until a fix is released.
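A short sketch for post 3 above: plain Python (not a Cinema 4D setup) showing why a parent axis that is not centered on the bottle turns a rotation into an apparent positional change. The pivot positions and the 90 degree angle are made-up values for illustration only.

    import math

    def rotate_around(point, pivot, angle):
        # Rotate a 2D point by `angle` radians around `pivot`.
        s, c = math.sin(angle), math.cos(angle)
        dx, dy = point[0] - pivot[0], point[1] - pivot[1]
        return (pivot[0] + c * dx - s * dy, pivot[1] + s * dx + c * dy)

    bottle_center = (0.0, 0.0)
    centered_pivot = (0.0, 0.0)  # parent axis sitting on the bottle's axis
    offset_pivot = (5.0, 0.0)    # parent axis sitting next to the bottle

    # Rotating around the centered pivot leaves the bottle where it is: (0.0, 0.0).
    print(rotate_around(bottle_center, centered_pivot, math.pi / 2))
    # Rotating around the offset pivot moves it to roughly (5, -5),
    # which reads as a positional change rather than a rotation.
    print(rotate_around(bottle_center, offset_pivot, math.pi / 2))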
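The second sketch relates to the fade trick in post 7: multiplying by a value close to 1 every frame is an exponential decay, which is why the fade is not linear and why its speed is hard to judge. The 0.99 factor comes from the post; the target brightness and frame rate are assumed numbers.

    import math

    factor = 0.99   # per-frame multiplier used in the Math Modifier
    start = 1.0     # starting brightness
    target = 0.1    # brightness that counts as "faded out" (arbitrary choice)

    # After n frames the brightness is start * factor**n, so solving for n:
    frames = math.log(target / start) / math.log(factor)
    print(f"~{frames:.0f} frames to fade from {start} to {target}")
    print(f"~{frames / 30:.1f} seconds at 30 fps")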
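Finally, a minimal sketch of the kill-by-threshold logic from post 12. This is plain Python standing in for the particle group setup; in the actual scene the grey values come from the Shader Field via the vertex color tag, here they are just random numbers.

    import random

    def emit_particles(count):
        # Emit evenly and sample a grey value from the "map" for each particle.
        return [{"id": i, "map_value": random.random()} for i in range(count)]

    def keep_by_map(particles, threshold=0.5):
        # Particles below the threshold are killed right away; the rest are
        # moved to the permanent group where they keep living.
        return [p for p in particles if p["map_value"] >= threshold]

    emitted = emit_particles(1000)
    permanent_group = keep_by_map(emitted, threshold=0.5)
    print(f"{len(permanent_group)} of {len(emitted)} particles survived the map check")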