
Leaderboard

Popular Content

Showing content with the highest reputation on 08/06/2024 in all areas

  1. Yeah, I do agree that other software has its share of lousy complainers as well. I'm no evangelist for any software, particularly Houdini, which I think has many major flaws. I am happy with the work the C4D devs are rolling out, especially the multithreading/parallelism work being done. Generally, what Maxon has released so far is in the right direction development-wise. My main criticism is just the price hikes, which are more of a corporate decision. Regardless, I am in no position to switch to other software because I don't think the alternatives are any better. C4D is still my go-to program and preferred workflow.
    1 point
  2. That is not entirely accurate. 3ds Max is the worst example you can bring to the 3D-app table, and I know that for a fact, since I used to work with it and my company unfortunately still uses it to render. In short, 3ds Max is one of the most overpriced apps in the world for studios (big or small) that cannot use indie licenses such as mine: it's around $2.5k per year, and that is only the software. Then you must start filling it with other plugins to make it useful, starting with its most popular renderers, V-Ray or Corona, which are paid separately. Almost no one uses Arnold in 3ds Max, which is the actual built-in renderer (although it's a good one). But it doesn't end there. In studios like mine we also pay for Forest Pack (plugin), tyFlow (plugin), and a bunch of other plugins to solve basic things like a take system. In the end, the money you spend on 3ds Max is a small fortune, and C4D is still better for character animation, motion design, and a lot of other work. I believe 3ds Max is still popular because of its legacy, a large user base, and its connection to the arch-viz business. But I agree with you when it comes to Maxon's policy of making us pay for things we don't use. I really don't need anything other than the C4D app itself. Heck, I don't even need RS, although I like to use it. So the best they can do is give users the power of choice. In studios like mine, C4D is used as part of a pipeline along with Houdini and 3ds Max. I don't need an extremely expensive piece of software full of things I'll never use. Cheers
    1 point
  3. Hi, I'm a long-time Core4D user, but I've never posted anything before, so I'll introduce myself here and apologize for my bad English. My name is Gianluca (Abetred); I'm present on other platforms, and I consider myself a problem solver and an experimenter with Cinema 4D, XPresso, Thinking Particles, and Python. I have an unhealthy passion for particle systems and related topics.

I would like to show you some experimental projects I have been carrying out for some time, which force the use of CUDA acceleration (through third-party libraries) to handle intensive computations usually entrusted to the CPU. Note that these tests are pure Python code: only the datasets passed to the CUDA function are accelerated, while the Python code itself remains single-core and unfortunately has its limits.

The first project (test) is 3D flocking of interactive Thinking Particles entities; the attached video is not the final result (which performs much better) but an intermediate point. Flocking simulation parameters:

- Separation: boids avoid mutual collisions by maintaining a pre-established minimum distance.
- Alignment: boids tend to match the speed and direction of their neighbors.
- Cohesion: boids cluster toward their group's center of mass.
- Boundaries: boids remain within the simulation domain with natural behavior.
- Speed limits: speed is clamped to a preset range for realistic movement.
- Prey: boids can track moving targets as a function of distance.
- Obstacles: boids interact with obstacles in a realistic way.

All of these rules are weighted by response factors to produce a smooth reaction.
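To give an idea of what the three core rules look like, here is a minimal vectorized sketch (not my actual project code; weights and radii are made-up illustrative values). The pairwise array math below is exactly the part that benefits from the GPU, e.g. by swapping NumPy for a CUDA-backed drop-in such as CuPy:

```python
import numpy as np


def boids_step(pos, vel, dt=0.1, radius=2.0, min_dist=0.5,
               w_sep=1.5, w_ali=1.0, w_coh=1.0, v_max=2.0):
    """One flocking step over all boids at once (pos, vel: N x 3 arrays).

    Implements separation, alignment, cohesion and the speed limit.
    The (N, N) pairwise math is the GPU-friendly part.
    """
    diff = pos[:, None, :] - pos[None, :, :]       # (N, N, 3) offsets
    dist = np.linalg.norm(diff, axis=2)            # (N, N) distances
    np.fill_diagonal(dist, np.inf)                 # ignore self
    neigh = dist < radius                          # neighbor mask

    accel = np.zeros_like(vel)
    n = neigh.sum(axis=1)                          # neighbors per boid
    has = n > 0

    # Separation: push away from boids closer than min_dist.
    close = dist < min_dist
    accel += w_sep * (diff * close[:, :, None]).sum(axis=1)

    # Alignment: steer toward neighbors' mean velocity.
    mean_vel = (vel[None, :, :] * neigh[:, :, None]).sum(axis=1)
    mean_vel[has] /= n[has, None]
    accel[has] += w_ali * (mean_vel[has] - vel[has])

    # Cohesion: steer toward neighbors' center of mass.
    center = (pos[None, :, :] * neigh[:, :, None]).sum(axis=1)
    center[has] /= n[has, None]
    accel[has] += w_coh * (center[has] - pos[has])

    vel = vel + accel * dt
    # Speed limit: clamp magnitude to v_max for realistic movement.
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > v_max, vel * (v_max / speed), vel)
    return pos + vel * dt, vel
```

Boundaries, prey and obstacles are just additional acceleration terms of the same shape, so they slot into the same step.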
Further implementations are possible, such as ray-based tracking of complex obstacles (predicting the most reliable escape route while avoiding collisions), a travel direction that keeps the rotation axis aligned with the acceleration, and an arc of vision that limits interactions to boids visible within a circular sector, for greater realism.

The second test is built around a surface of points simulating a moving fluid. Some significant data about the scene:

- The "liquid surface" is a mesh composed of 40,400 points, without faces or segments.
- The spoon interacts with the surface through a selection of about 600 vertices (but there could be arbitrarily many more without excessively burdening performance).
- The vertex map is generated for visual feedback only; the actual calculation is handled internally by the code.

The code runs in real time inside a Python Generator and manages the interaction between the two objects, computes the effect of the waves in the fluid, and positions the points. This test is also fully experimental. Note that in the following video the CUDA acceleration applies only to the first part, the one labeled "raw simulation": already applying the data to the vertex map slows performance, because that part is handled by the CPU.

But let's go further: the gain is evident not only for simulation in the viewport. The real advantage lies in building the cache (for example an Alembic export, or simply writing the dataset to a file), which is also computed by the GPU and turns out to be a real time saver.

Thanks for your attention, I will try to keep this topic updated...
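For anyone curious how the wave part can work conceptually: a point surface like this can be treated as a height field driven by the discrete 2D wave equation. This is a stripped-down, self-contained sketch (not my scene code; grid size, wave speed and damping are illustrative), where the array update is again the piece worth handing to the GPU:

```python
import numpy as np


def wave_step(h, h_prev, c=0.3, damping=0.995):
    """Advance a height field one step of the discrete 2D wave equation.

    h, h_prev: (rows, cols) height arrays at time t and t-1.
    The Laplacian + leapfrog update below is pure array math,
    so it maps directly onto a CUDA-accelerated array library.
    """
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
           np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4.0 * h)
    h_next = (2.0 * h - h_prev + c * lap) * damping
    return h_next, h


# A "spoon" poke: displace a small vertex selection, then propagate.
h = np.zeros((64, 64))
h_prev = np.zeros((64, 64))
h[30:34, 30:34] = -1.0          # pressed-down region (the vertex selection)
for _ in range(10):
    h, h_prev = wave_step(h, h_prev)
```

In the real scene the resulting heights would be written back to the mesh points each frame; mapping them to a vertex map for feedback is the CPU-bound step mentioned above.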
    0 points