All Activity


  1. Today
  2. I'm even more confused now. Wikipedia says this: So Wikipedia also says that Straight Alpha is the one that has the emission independent from the Alpha. Also, apparently Premultiplied Alpha IS multiplied with the Alpha value. It would make sense, since a 50% "covered" 70% green is 35%, because half of the emission is absorbed by the coverage. I think I understand the issue now. The blog is simply about the BLENDING operation with premultiplied alpha values. It has nothing to do with the images themselves. It is simply easier to do calculations with premultiplied images, since straight alpha images need to be multiplied by their alpha at runtime by the renderer to get the premultiplied values needed for blending. Premultiplied simply comes with the multiplication integrated already. I could have thought of this sooner, since a couple of pages earlier the blog talks about how Photoshop internally uses straight alpha calculations and thus ends up with wrong colors. Photoshop only blends correctly in 32-bit mode, since it uses linear premultiplied maths then. At least that's how I understood it. tl;dr: Doesn't really matter, just tell your software what you're feeding it and it should be fine. Also use straight or premultiplied according to what you want to do, like @mash said here: https://www.core4d.com/ipb/forums/topic/119202-premultiplied-vs-straight-alpha-worst-rabbithole-i-have-ever-been-in/?do=findComment&comment=764541
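The runtime multiplication described in this post can be sketched in a couple of lines. This is a hedged illustration, not code from the thread; it assumes normalized float RGBA values in [0, 1]:

```python
def straight_to_premultiplied(r, g, b, a):
    """Convert a straight-alpha pixel to premultiplied form.

    A straight-alpha image stores emission independently of coverage,
    so a compositor must multiply RGB by alpha before blending;
    a premultiplied image ships with this step already baked in.
    """
    return (r * a, g * a, b * a, a)

# The 50% "covered" 70% green example from the post:
# half of the emission is absorbed by the coverage, leaving 35%.
print(straight_to_premultiplied(0.0, 0.7, 0.0, 0.5))
# (0.0, 0.35, 0.0, 0.5)
```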
  3. lol I think I've stumbled over this exact blog at some point. That said, I'm not a fan of PNGs either unless I'm rendering a still that just needs some color adjustments. Thing is, all that you guys said is what I thought as well, but that blog seriously confuses me. If this man is to be believed (and I think he can be, since he uses tons of credible sources that I've personally double-checked) and I did not misunderstand it, it is exactly the other way around, hence my scepticism towards the naming in C4D. Let me quote some stuff from the blog: ... which would make absolute sense that it would look like a STRAIGHT alpha in the images I posted above and absolutely NOT like the premultiplied images above. ... which also supports the idea that premultiplied is NOT the thing that has the background or anything else baked in, but is rather "lossless". I know that ALL sources, including ChatGPT / Copilot, tell me that is NOT the case, but the general confusion around the terms that you can feel everywhere has me totally bewildered. It doesn't help that you can find some information on AE and PS simply not really working in premultiplied math but instead internally doing straight alpha calculations and more or less barely getting the right result. Some of these threads are old though, so I have no idea if this is still the case. I would believe so though, since we're still talking about Adobe. Especially what you posted @HappyPolygon. If I did not TOTALLY misunderstand the blog, and if it is truthful, the values that ChatGPT gave you should be EXACTLY the other way around, since premultiplied is supposed to have the color values independently from the Alpha, and straight Alpha is supposed to be the exact opposite. Especially with language models like that and how unreliable they are, I could totally see it picking up a wrong narrative that has been widely spread on the internet by people just misunderstanding what "premultiplied" actually means. 
Because if you think about it, premultiplied sounds EXACTLY like what straight alpha supposedly is. I'm just questioning if maybe, just maybe, almost everybody is using the wrong terms in this case, which wouldn't surprise me in the slightest. Thank you for that super valuable input, that is going into my own little documentation for reference in the future 🙂
  4. From what I understand from all this, the premultiplied method does not keep a separate Alpha map of the image. This means that it encodes the Color and Alpha channels on the same bitmap, hence the name: the process of embedding the alpha map has already been done before previewing/opening the image. The straight one is like the raw data: you have two separate data structures, the color and the alpha. To process/view it correctly, the corresponding values get multiplied after the fact. With this you don't have to extract the alpha map if you need it, as it is already available.
  5. Asked ChatGPT, it gave this answer: R: 255, G: 0, B: 0, A: 0 and R: 0, G: 0, B: 0, A: 0
  6. I think you may have this backwards. In plain language: Straight alpha: Stores the full RGB colour for each pixel, but ignores how transparent it may or may not be, i.e. the transparency of a pixel has no impact on the colour stored. This means, for example, that if you render a white cloud with a soft wispy edge in a blue sky, the rendered cloud will only contain white cloud colours; the blue of the sky will not be present in the rendered cloud, even where the alpha transparency eats into the cloud. Premultiplied: This simply means the image being rendered has already had the background baked into the rendered colour data. In the cloud example it means that it will start to turn blue as the edge of the cloud becomes more transparent. In practical terms, straight alphas can be great because there's no bleeding of the background into the visual RGB data; you can take your white cloud and throw it onto any background you like, and there won't be any blue from the sky creeping in. On the other hand, if you place your premultiplied cloud onto an orange sunset background, you'll get a blue halo around the cloud, which sucks. However... it isn't all roses. Sometimes you need the background colour to be baked into the transparent edge, because some things are just flat out impossible to render due to the number of layers present or the motion on screen. Here's one which screws me over regularly: what happens if I have a 100% transparent plastic fan blade, but the fan blade is frosted, and in the middle of the fan is a bright red light? Visually the fan blade has the effect of looking like a swooshing Darth Vader lightsabre. It's bright red and burning white from the brightness, but what's there? A 100% transparent object... The alpha channel with a straight alpha will obliterate my rendering; it's 100% transparent plastic. You can see it, but the alpha channel decides to ruin your day and now the rendering is useless. 
The only option here is a premultiplied alpha where the background is baked into the motion blur and SSS of the plastic fan blade. Sure, I need to make sure my 3D background somewhat matches my intended compositing background, but it's the only way to get any sort of useful render. The same goes for motion blur, DOF blur, and multiple transparent layers in front of each other (steam behind glass). The honest answer is: use whichever one is least likely to screw you over. If you have lots of annoying transparent/blurry things to deal with, go premultiplied but plan your background ahead of time. If you want clean alphas on a simpler object render, go straight alpha. I haven't read your linked blog all the way through, but I will say... there is an abundance of wrong people loudly proclaiming themselves to be founts of all knowledge. There's one in the Octane community who insists on inserting himself into literally every thread on the entire Octane forum to tell you you're an idiot for using a PNG file; he has hundreds of blog pages which are a strange mix between 3D rendering and flat-earth magical woo-woo maths to show everyone just how right he is. That said, your rainbow example does match up with what the blog says. The only difference is the blog seems to think the straight alpha is evil and you should only use the premultiplied, whilst I would say both have their uses, with straight being preferable when possible.
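The blue-halo problem described above is easy to show numerically. A hedged sketch (the pixel values are invented for illustration; normalized premultiplied RGBA):

```python
def over(fg, bg):
    """Porter-Duff 'over' for premultiplied RGBA: out = FG + (1 - FG.a) * BG."""
    inv = 1.0 - fg[3]
    return tuple(f + inv * b for f, b in zip(fg, bg))

# A half-transparent cloud-edge pixel rendered against a blue sky,
# stored premultiplied: the blue channel carries some baked-in sky.
cloud_edge_blue_sky = (0.5, 0.5, 0.7, 0.5)
# The same edge rendered cleanly (straight white cloud, premultiplied).
cloud_edge_clean = (0.5, 0.5, 0.5, 0.5)

# Composite both over an opaque orange sunset background.
sunset = (1.0, 0.5, 0.1, 1.0)
print(over(cloud_edge_blue_sky, sunset))  # elevated blue channel -> halo
print(over(cloud_edge_clean, sunset))     # no halo
```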
  7. So for the last couple of days I've been trying to get really deep into digital color science and all the baggage that comes with it. This is all in preparation for upcoming projects and the desire to understand this topic once and for all, at least the basics. So far everything has been working out, from Input Transforms over ACES to Color Spaces etc. This all changed when I got to the good old topic of Alpha (oh god help me please). As far as I understand now, and from a seemingly very knowledgeable source, there are basically two types of color encoding with Alpha: Premultiplied Alpha / Premultiplied Color / Associated Alpha, and Straight Alpha / Unmultiplied Alpha / Unassociated Alpha. Before I start, we have to fundamentally clarify two things, important for terminology: RGB describes the amount of color emitted. Not the brightness, or how strong the color is, just the amount of color that is "emitted" from your screen, for each primary color. Alpha describes how much any given pixel occludes what is below it. tl;dr: RGB = Emission, Alpha = Occlusion. Premultiplied Alpha ... probably has the dumbest name ever, because intuitively you'd think something is multiplied here, right? Well, that's WRONG. The formula for blending with Premultiplied Alpha looks like this, where FG is Foreground and BG is Background: FG.EMISSION + ((1.0 - FG.OCCLUSION) * BG.EMISSION). What this comes down to is that premultiplied basically saves the brightness of each color independently from the Alpha, and the Alpha just describes how much of the background this pixel will then cover. This means that you can have a very bright pixel with its Alpha set to 0, so it will be invisible, but the information will STILL be there even though the pixel is completely transparent. Blending works like this, where foreground is our "top" layer and background is our "bottom" layer that is being composited onto: 
1) Check if the current pixel has some kind of occlusion (Alpha < 1) in the foreground. 2) Scale the background "brightness" or "emission" by the occlusion value (BG Color * (1 - FG Alpha), pretty much). 3) Add the emission of the current pixel's foreground (BG Color from 2. + FG Color). Straight Alpha ... is considered to be a really dumb idea by industry veterans, and often not even called a real way to "encode color and Alpha". The formula looks like this: (FG.EMISSION * FG.OCCLUSION) + ((1.0 - FG.OCCLUSION) * BG.EMISSION). What this means is that Straight Alpha multiplies the pixel emission by the occlusion (Alpha), as opposed to having the final emission of the pixel saved independently from the Alpha. If you've ever opened a PNG in Photoshop, this is pretty much exactly what Straight Alpha is. There is no Alpha channel if you open a PNG in PS, just a "transparency" baked into your layer. All the pixels that are not 0% transparent are not their true color, as Premultiplied Alpha would describe it. I have not read this terminology anywhere, but personally I would kinda call this a "lossy" form of Alpha, since the true color values are lost and are not independent from the Alpha, unlike Premultiplied Alpha. Why am I telling you all this? Fundamentally I just want to check if I understand this concept, because there is so much conflicting information on the internet it's not even funny. I am so deep in the rabbithole right now that I question if some software even uses the terminology correctly, and C4D is one of them. You know how C4D has this nice little "Straight Alpha" tick box in the render settings? Well, according to the manual it does this: Am I completely crazy now, or is this not EXACTLY what I, and the blog post I linked above, describe as Premultiplied Alpha? Because we have RGB and Alpha as separate, independent components? Another example: if you just search for "Straight Alpha" on the internet, you might find this image: This is the same story as above. 
Doesn't the Straight Alpha example look exactly like Premultiplied Alpha, and the example for Premultiplied Alpha like what Straight Alpha really is? I truly feel like I'm taking crazy pills here, and I hope someone more knowledgeable in the whole Color Science / Compositing field can tell me where the hell I am wrong. Did I misunderstand what these two concepts will actually look like in practice, did I miss some important detail, or is there just so much misinformation about this topic EVERYWHERE? If you've made it to here, thank you for listening to my ramblings. I hope I can be enlightened, otherwise this is going to keep me occupied forever...
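To sanity-check the two blend formulas quoted in the post, here is a hedged Python sketch (not from the post; normalized floats, RGB handled per channel):

```python
def blend_premultiplied(fg_rgb, fg_a, bg_rgb):
    """FG.EMISSION + ((1.0 - FG.OCCLUSION) * BG.EMISSION)"""
    return tuple(f + (1.0 - fg_a) * b for f, b in zip(fg_rgb, bg_rgb))

def blend_straight(fg_rgb, fg_a, bg_rgb):
    """(FG.EMISSION * FG.OCCLUSION) + ((1.0 - FG.OCCLUSION) * BG.EMISSION)"""
    return tuple(f * fg_a + (1.0 - fg_a) * b for f, b in zip(fg_rgb, bg_rgb))

# A "very bright pixel with Alpha set to 0" over a black background:
bright, black = (0.8, 0.2, 0.2), (0.0, 0.0, 0.0)

# Premultiplied keeps the stored emission even at full transparency...
print(blend_premultiplied(bright, 0.0, black))  # (0.8, 0.2, 0.2)
# ...while straight alpha multiplies it away at blend time.
print(blend_straight(bright, 0.0, black))       # (0.0, 0.0, 0.0)
```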
  8. Yesterday
  9. Thanks. I hadn't seen anything from GDC or NAB so I was concerned Maxon was scaling back on its event coverage. I look forward to seeing these when posted.
  10. Last week
  11. Hey Jeff - Maxon is recording the NAB presentations and plans to post them 1-2 weeks after the show.
  12. NAB is another event Maxon advertises guest speakers but doesn't stream or show the recorded live event afterward. Instead they put up 30 second videos on IG with goofy music for each presenter. Maxon was always a leader in live streaming from the events they went to. What happened Maxon?
  13. The Classic - Carpet roll! 120_Carpet_Roll(MG+XP).c4d
  14. Lava lamp with scene file, attachment is missing in one of the posts above : ) 18_Lava_Lamp.c4d
  15. PM request - point in polygon. This is the simplest way to detect if a point is inside any polygon. The test location must intersect with the polygon on all axes for the result to be true. 247_Point_in_polygon.c4d
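As a language-agnostic companion to the scene file, here is the classic 2D ray-casting version of the same test. This is a hedged generic sketch, not a reconstruction of the setup in 247_Point_in_polygon.c4d:

```python
def point_in_polygon(px, py, vertices):
    """Ray-casting test: cast a ray from (px, py) along +x and count
    how many polygon edges it crosses; an odd count means inside."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Does this edge straddle the horizontal line y = py?
        if (y1 > py) != (y2 > py):
            # x coordinate where the edge crosses that line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(point_in_polygon(1, 1, square))   # True
print(point_in_polygon(3, 1, square))   # False
```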
  16. Welcome back, nice to see you here again 🙂
  17. Hi Hello from Finland. 🙂 I am back on the forums and was registered at the old c4dcafe back in 2005. I haven't been able to take part in the community for a long time since I started to study, work and take care of 2 kids. Still using Cinema 4D, but I have also learned Blender. Currently studying information technology, as an old-timer geek. We learn Linux and servers etc.
  18. Mash

    Stuck In A Loop

    If I were in this situation, I would disconnect my machine from the internet (kill wifi, pull ethernet), because then there's a good chance it will simply time out after 30 seconds.
  19. @Mash. Ha. I'm picturing two people needing to complete this sequence. Like launching a missile. "On my count, hit Shift on your keyboard and I'll do the same. I will use a pencil duct-taped to my head to press Delete." Seems like the ESC feature should kill ALL searching, or at least give you the option to stop searching this directory or all searches. Right now it just kills the current folder it is searching. In my case the other day, the Google Drives that are shared with me might be in the 20-30 range, each with their own mass of folders. I gave it a couple of hours, hitting ESC like a woodpecker, but it was just too much.
  20. I haven't watched the series yet myself, but that title sequence is wonderful. And the music is as harmonically interesting as the visuals and techniques are to us CG guys. Perfect alternating consonance and dissonance. For those of you who have an interest in such things, here's Charles Cornell to explain why that's also great! CBR
  21. Welcome to the forums, we are glad to have you onboard!
  22. Original title: Severance TV Series, 2022–, TV-MA, 50m Director: Ben Stiller Creator and writer: Dan Erickson Stars: Adam Scott, Britt Lower, Zach Cherry Genre: Drama, Mystery, Sci-Fi, Thriller Plot: Mark leads a team of office workers whose memories have been surgically divided between their work and personal lives. When a mysterious colleague appears outside of work, it begins a journey to discover the truth about their jobs. Fun Facts: Due to a shortage of soundstage space, production designer Jeremy Hindle had to create different areas of the severed floor by rearranging the hallway sets and using VFX to lengthen them. The opening running sequence of season 2 was accomplished with a mixture of elements: a camera operator running alongside Adam Scott, a treadmill, VFX to extend hallways/merge shots, and a robotic camera rig that moved so quickly around Scott's head that he had to rehearse his movements very specifically to avoid potential injury. Intro created by Oliver Latta Software used: SideFX Houdini, Maxon Cinema 4D, Pixologic ZBrush, Nuke
  23. Golaem 9.2 The main change is that you no longer need a license to use the feature set from Golaem Lite, the old lower-priced edition of the software for layout artists. Users with a subscription to the Media and Entertainment Collection can now use the Golaem Lite features on as many machines as they want. In addition, the old watermarked Personal Learning Edition has been replaced with a standard 30-day Autodesk trial version. Golaem 9.2 is compatible with Maya 2022+ on Windows 10+ and CentOS, RHEL, AlmaLinux and Rocky Linux 8. Golaem is available as part of Autodesk’s Media & Entertainment product bundle, which is rental-only. Subscriptions cost $335/month or $2,700/year. https://www.autodesk.com/collections/media-entertainment/overview https://help.autodesk.com/view/GOLM/ENU/?guid=glm_golaem-92-20250326 Maya 2026 and Maya Creative 2026 Unlike Autodesk’s recent annual updates to Maya, Maya 2026 does not introduce any complete new toolsets, but there are updates throughout the existing core functionality, including 3D modeling, retopology, shading, animation and simulation. There are also updates to Maya’s integration plugin for Autodesk’s Arnold renderer, and a new Animate in Context feature for users of Autodesk’s Flow Production Tracking platform. Autodesk has also released Maya Creative 2026, the corresponding update to the cut-down edition of Maya for smaller studios. For asset development, Maya’s Boolean node gets a new Volume mode. Unlike the existing Mesh mode, the source objects are converted to volumes before computing the Boolean operation, then the output is converted back to polygonal geometry. Autodesk pitches it as a quick way to block out organic models like creatures and characters. Other changes to the modeling toolset include the option to set scale units when importing or exporting models in STL format for 3D printing. For texturing, the main change in Maya 2026 is that OpenPBR is now the default surface shader. 
Support for the open material standard, intended as a unified successor to the Autodesk Standard Surface and Adobe Standard Material, was originally added in Maya 2025.3. In addition, LookdevX, Maya’s plugin for creating USD shading graphs, has been updated. LookdevX for Maya 1.7 features a number of workflow improvements, particularly to publishing, and support for relative file paths when exporting MaterialX documents. Other changes include an experimental new generative textures API, making it easier for TDs to integrate third-party generative AI services into LookdevX by creating C++ or Python plugins. Maya’s Substance plugin, for editing procedural materials in Substance format inside Maya, has also been updated, although there aren’t many details about what’s new in Substance 3.04. Maya 2026 also features performance and workflow improvements to the new ML Deformer. Introduced in Maya 2025.2, it uses machine learning to create a fast approximation of complex deformations, enabling users to take characters with complex, slow-to-process rigs and train the deformer to represent the deformations they generate in the character mesh. In Maya 2026, it is possible to visualize the difference between the source and target meshes as a heat map to help troubleshoot output, using a new display option, Apply Mesh Compare. The update also makes the training process more customizable, makes output less noisy, and improves performance: load times are “40 times faster”, and disk space usage is “80% reduced”. Other changes relevant to animators include the option to export Playblasts in .webm format. Bifrost for Maya, Maya’s node-based framework for building effects, gets a significant update in Maya 2026, with Bifrost for Maya 2.13 adding a new FLIP solver for liquid simulation. 
It was already possible to use it to simulate liquids by meshing the output of the MPM solver, but the workflow was better suited to granular and viscous fluids, whereas the FLIP solver is better suited to large-scale water simulations. The new FLIP solver is described as “largely similar” to the older Bifrost Fluids plugin, but shares the same framework as other simulation types, like smoke, fire and granular materials; and the node graph provides more flexibility than Bifrost Fluids’ menu-driven interface. Bifrost Fluids is still included with the software as the implementation in Bifrost for Maya lacks some of the functionality from BOSS, Bifrost Fluids’ ocean surface toolset. Other changes in Bifrost for Maya 2.13 include updates to its work-in-progress procedural character rigging system, updates to texture baking, and 20 new node types. Maya 2026 also ships with an updated version of the integration plugin for Autodesk’s Arnold renderer, with MtoA 5.5.0 introducing support for the Arnold 7.4.0.0 core. Key changes since the release of Maya 2025.3 include a new transmission_shadow_density parameter in the OpenPBR Surface and the Standard Surface shaders, to control the look of shadows cast by transparent objects, as shown in the image above. Global Light Sampling (GLS) now takes material glossiness into account, which “greatly enhances” render quality, especially in scenes with many small lights. Arnold’s support for ID matte-generation system Cryptomatte has also been improved, with a new internal implementation adding GPU support, and improving performance on CPU. Other changes include improvements to MaterialX and USD support, to the implementation of OpenPBR, and to the MtoA plugin itself and the Arnold RenderView in Maya. However, Arnold 7.4 is a compatibility-breaking update, so shaders, procedurals, and other plugins compiled against older versions of Arnold will need to be recompiled. 
USD for Maya, Maya’s Universal Scene Description plugin, has also been updated. USD for Maya 0.31 improves lighting workflows, adding support for light linking, and the option to control lighting by looking through a selected light source. Other changes include support for USD cameras in Render Sequence, the option to search for USD prims in the Outliner, and to add or remove USD schemas in the Attribute Editor. Maya 2026 also integrates more closely with Flow Production Tracking, Autodesk’s production-management platform, previously known as ShotGrid. Originally announced last year, the new Animate in Context feature makes it possible to view shots surrounding the active scene directly in Maya. Animators and other shot-based artists can now scrub between their own work and that of other artists, helping to ensure that the changes they make preserve the continuity of the edit. The feature is available on Windows and Linux only, and is still officially in beta. Autodesk has also released Maya Creative 2026, the corresponding update to the cut-down edition of Maya aimed at smaller studios, and available on a pay-as-you-go basis. It includes most of the new features from Maya 2026, with the exception of the updates to Bifrost for Maya. In related news, Golaem, the Maya crowd-simulation plugin that Autodesk acquired last year, is now commercially available again. Autodesk includes it in the release notes for Maya 2026, but it’s a separate purchase, and requires a subscription to the full Media and Entertainment Collection, not Maya alone. You can find more details in our story on Golaem 9.2, the current release. Maya 2026 is available for Windows 10+, RHEL and Rocky Linux 8.10/9.3/9.5, and macOS 13.0+. The software is rental-only. Subscriptions cost $245/month or $1,945/year, up $10/month or $70/year since the previous update. 
In many countries, artists earning under $100,000/year and working on projects valued at under $100,000/year qualify for Maya Indie subscriptions, now priced at $320/year, up $15/year. Maya Creative is available pay-as-you-go, with prices starting at $3/day, and a minimum spend of $300/year. https://www.autodesk.com/customer-value/me/maya https://help.autodesk.com/view/MAYAUL/2026/ENU/?guid=GUID-BAF59B47-E24F-4F87-9B77-ABE78D3F8268 3ds Max 2026 In the 3D modeling tools, the Vertex Weld modifier has been updated to support Spline objects as well as Mesh objects. Using Vertex Weld to close a spline means that it is still possible to apply further modifiers to the spline, including Extrude, Bevel, or Bevel Profile, and “still have a successful result”. There is also a new three-point method for creating spline Rectangle objects, which enables the object to be created at angles other than world-aligned right angles. 3ds Max 2026 features updates to the software’s retopology plugins, including the new Flow Retopology plugin introduced with 3ds Max 2025.2 last July. It mimics the functionality of the existing Retopology Tools, but makes it possible to submit retopology jobs to run in the cloud, rather than on the user’s machine. The current version, Flow Retopology for 3ds Max 1.2, is now bundled with 3ds Max 2026, and the usage limit has been raised from 30 to 50 cloud retopology jobs per month. Version 1.6 of the Retopology Tools plugin itself updates the Mesh Cleaner Modifier, and reduces processing time when using the ReForm algorithm. For texturing, the main change in 3ds Max 2026 is that OpenPBR is now the default material. Support for the open material standard, intended as a unified successor to the Autodesk Standard Surface and Adobe Standard Material, was originally added in 3ds Max 2025.3. 
There are also three new Open Shading Language (OSL) maps, including the Perlage map (above), which creates grooved circular patterns like those found inside antique watches. The Substance plugin, for editing procedural materials in Substance format inside 3ds Max, has also been updated, with Substance 3.0.5 making it possible to import materials from the online Substance 3D Assets library directly into 3ds Max’s Slate Material Editor. For animators, the changes in 3ds Max 2026 are very similar to those in 3ds Max 2025.3: more bugfixes to the CAT and Biped toolsets. There are also more performance updates to key modifiers, including the Array, Conform, Displace and Skin Modifiers. 3ds Max Fluids have been updated for a processing speed improvement of “up to 10%”. Other general quality-of-life improvements include a new Preserve Stack Position toggle to return to the modifier previously selected for an object before selecting a new object. The Create section of the Command Panel gets a new Object Search widget. 3ds Max 2026 also ships with an updated version of the integration plugin for Autodesk’s Arnold renderer, with MAXtoA 5.8.0 introducing support for the Arnold 7.4.0.0 core. Key changes since the release of 3ds Max 2025.3 include a new transmission_shadow_density parameter in the OpenPBR Surface and the Standard Surface shaders, to control the look of shadows cast by transparent objects, as shown in the image above. Global Light Sampling (GLS) now takes material glossiness into account, which “greatly enhances” render quality, especially in scenes with many small lights. Arnold’s support for ID matte-generation system Cryptomatte has also been improved, with a new internal implementation adding GPU support, and improving performance on CPU. Other changes include improvements to MaterialX and USD support, to the implementation of OpenPBR, and to the MAXtoA plugin itself and the Arnold RenderView. 
However, Arnold 7.4 is a compatibility-breaking update, so shaders, procedurals, and other plugins compiled against older versions of Arnold will need to be recompiled. 3ds Max’s Universal Scene Description plugin has also been updated, with USD for 3ds Max 0.10 adding a new Layer Editor for managing USD layers, and support for light linking. The plugin is also now bundled with 3ds Max, rather than being a separate download. 3ds Max 2026 is compatible with Windows 10+. It is rental-only. Subscriptions cost $235/month, up $10/month since the previous release, or $1,945/year, up $70/year. In many countries, artists earning under $100,000/year and working on projects valued at under $100,000/year qualify for Indie subscriptions, which now cost $320/year, up $15/year. https://makeanything.autodesk.com/3dsmax https://help.autodesk.com/view/3DSMAX/2026/ENU/?guid=GUID-5E7810B6-59DF-434E-AB24-BC2A452C715C MotionBuilder 2026 MotionBuilder 2026 extends USD-based workflows, USD support having been introduced last year in MotionBuilder 2025. When loading a USD stage file, the animation contained in the file is now retained, and will be rendered. A new animatable frame offset property in the Python API makes it possible to retime or time warp the motion. There is also a new button in the UI to reload USD stage proxies, and a corresponding property in the API. Workflow improvements include a new shortcut key to force-select previously unselectable objects, and support for extended function keys – [F13] and beyond – as keyboard modifiers. There is also a new SDK property to control whether HUDs are exported in FBX files, and some API properties have been renamed for clarity. MotionBuilder 2026 is available for Windows 10+ and RHEL and Rocky Linux 8.10/9.3/9.5. It is available rental-only, with subscriptions costing $2,225/year, up $80/year since the previous release. 
https://help.autodesk.com/view/MOBPRO/2026/ENU/?guid=Whats-New-MotionBuilder-2026 Photoshop 26.5 Photoshop 26.5 is a minor workflow update, redesigning the Adjustment Presets layout. Changes include a new tabbed layout, new icons, and the option to organize presets into groups, including by dragging and dropping. Photoshop 26.5 is compatible with Windows 10+ and macOS 12.0+. In the online documentation, it is also referred to as the March 2025 release or Photoshop 2025.4. The software is rental-only, with Photography subscription plans that include access to Photoshop now starting at $239.88/year. Single-app Photoshop subscriptions cost $34.49/month or $263.88/year. https://helpx.adobe.com/photoshop/using/whats-new/2025-4.html 3DGS Render 3.0 Based on research published in 2023, 3D Gaussian Splatting is a new 3D scanning method. Like photogrammetry, it begins by generating a point cloud of a 3D object or scene from a set of source photos, but rather than converting the point cloud to a textured mesh, it converts it to gaussians, using machine learning to determine the correct color for each. The result is a high-quality and potentially fast-rendering 3D representation of the object or scene being scanned. An increasing number of 3D scanning tools now support 3D Gaussian Splatting, one obvious example being Kiri Innovations’ own Kiri Engine. However, native support for 3DGS data in CG applications is currently limited, although it has recently been implemented in both V-Ray and D5 Render. 3DGS Render is intended to address some of the “pain points” with existing workflows, making it possible to manipulate 3DGS scan data inside Blender like a standard 3D object. The add-on converts 3DGS data imported in PLY format into an object that can be transformed, scaled, rotated or duplicated like conventional geometry. 
The scan can then be lit and rendered inside Blender, although currently only using Eevee, Blender’s real-time renderer, not Cycles, the main production renderer. 3DGS Render 3.0 makes it possible to retexture 3DGS scans by painting onto them directly, as well as by editing shader properties. You can see the workflow briefly in the video at the top of the story: as well as painting solid colors, it is possible to paint through an image texture. 3DGS Render also now automatically generates UV maps when importing 3DGS data, making it easier to texture using standard Blender workflows. The update also lets you convert standard polygonal meshes to 3DGS – the reverse of what most people will currently be doing, we imagine, but it may open up new workflows in future. In addition, it is now possible to export face edits and transformations, like changes of scale and rotation, made inside Blender when exporting 3DGS data to other DCC applications. There is also an experimental new feature for baking modifier effects to improve performance, the option to apply animated effects to 3DGS data having been added in 3DGS Render 2.0. 3DGS Render 3.0 is compatible with Blender 4.2+. It’s a free download. The source code is available under an Apache 2.0 license. https://rdk2hgu4np.feishu.cn/docx/H07Nd01YQoVdLFxCftDclzo9nTh https://github.com/Kiri-Innovation/3dgs-render-blender-addon Arnold 7.4.1 Arnold 7.4.1 makes it possible to use the toon shading features – the Toon shader itself, and the Contour filter – when rendering on the GPU, as well as on CPU. The GPU implementation is currently limited to direct lighting only, so it does not support reflections, refractions, or indirect lighting, in addition to the standard limitations on CPU. The update also improves render performance, particularly in complex scenes with a lot of procedural instancing. 
The new options.procedural_instancing_optimization flag “usually produces speedups between 1-2x, but for sufficiently complex scenes the speedup can be significantly larger”. According to the release notes, Activision’s new Caldera USD data set (shown above), which is based on a map from Call of Duty: Warzone, renders a full 18x faster.

Parallel scene initialization also now scales better with CPU cores, reducing time to first pixel on many-core machines.

In addition, rendering photometric (IES) lights and mesh lights using Global Light Sampling now generates higher-quality results for the same render settings and render times. In the comparison images in the release notes, the difference is particularly striking on a motion-blurred mesh.

When rendering USD data, Arnold now uses Hydra by default to handle the translation from USD, which makes the resulting render consistent with its Hydra render delegate.

Arnold 7.4.1 also introduces a new HTML-based interactive viewer for render statistics. It displays render stats – such as frame render time, render time by category or by node, memory usage, and texture usage – in a visual format, making it easier to troubleshoot performance bottlenecks.

All of Arnold’s integration plugins have been updated to support the new features:

3ds Max: MAXtoA 5.8.1
Cinema 4D: C4DtoA 4.8.1
Houdini: HtoA 6.4.1
Katana: KtoA 4.4.1
Maya: MtoA 5.5.1

If you’re using 3ds Max 2026 or Maya 2026, both of which were also released this week, be aware that these aren’t the versions of MAXtoA and MtoA included with them, which support Arnold 7.4.0, the previous version of the renderer.

MtoA 5.5.1 also adds support for OpenVDB point data in the aiVolume node, enabling Maya users to import VDB files that contain point data and render them as Arnold points primitives.

Arnold 7.4.1 is available for Windows 10+, RHEL/CentOS 7+ Linux and macOS 10.13+. Integrations are available for 3ds Max, Cinema 4D, Houdini, Katana and Maya.
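For users rendering exported scenes directly, the release notes name the new flag as a parameter on the options node, so it should be settable in a .ass file. A hedged sketch, assuming the standard .ass parameter-per-line syntax (the AA_samples line is just an illustrative neighboring parameter, not related to the new feature):

```
# Hypothetical .ass fragment: enabling the new procedural
# instancing optimization on the scene's options node.
options
{
 AA_samples 3
 procedural_instancing_optimization on
}
```

The flag name comes from the release notes; check the Arnold 7.4.1 documentation for the parameter's default and any interactions with your procedural setup.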
GPU rendering is supported on Windows and Linux only, and needs a compatible NVIDIA GPU.

The software is rental-only, with single-user subscriptions costing $55/month, up $5/month since the start of 2025, or $415/year, up $15/year.

https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_core_7410_html

Mudbox 2026

The release notes read, in full: “Welcome to Mudbox 2026. This release includes minor updates.” Although Autodesk did previously put out more substantial releases, Mudbox 2026 is now the fifth consecutive annual update with no new features listed in the release notes.

The firm said that it was “committed to the development of all [of its] tools, including Mudbox” but that it could not comment on future releases.

However, while Autodesk raised the prices of most of its Media & Entertainment products in January, the price of Mudbox actually dropped, at least for a monthly subscription. The monthly rental cost fell from $15/month to $10/month, although the price of annual subscriptions remains unchanged.

Mudbox 2026 is available for Windows 10+, RHEL/Rocky Linux 8.10/9.3/9.5, and macOS 13.0+. The software is rental-only, with subscriptions costing $10/month or $100/year.

https://help.autodesk.com/view/MBXPRO/ENU/?guid=Mudbox_ReleaseNotes_release_notes2026_html

ZBrush 2025.2

The main changes in ZBrush 2025.2 are to ZModeler, its polygonal modeling system for creating base meshes and hard surface models.

The update adds snapping options when inserting edge loops, making it possible to snap the new loop to the centers of the polygons into which it is being inserted, or to user-defined points. There is also a new Selection Action Mode for “quick polygon selection”, and the option to store commonly used features as presets.

The new functionality was developed while integrating ZModeler into ZBrush for iPad 2025.3, the new iPad edition of the software, included in ZBrush subscriptions.
Users of Redshift, Maxon’s production renderer, the CPU-only version of which is now available with ZBrush subscriptions, get “18+” new materials. The Redshift material attributes have been updated to match the standard Redshift materials available in other DCC applications compatible with Redshift.

ZBrush 2025.2 is compatible with Windows 10+ and macOS 11.5+. It is rental-only. ZBrush subscriptions cost $49/month or $399/year.

https://support.maxon.net/hc/en-us/articles/15780513877788-ZBrush-2025-2-0-Release-Notes
  24. That's an old plugin... I don't know why he made those videos now. The effect is still possible with the Cloner; the plugin offers some flexibility and ease, but you can still do it without it. In his demonstration there's a Mix and Match plugin in the Extensions tab, but I couldn't find it on his website. Maybe it's something he's still working on.
  25. Hi, I've had a long break from the forums since I became a dad to two kids. Yes, I'm an old user of Cinema 4D, but I liked Modo too. You can still get a 10-year licence for Modo + Octane render. (I was on the forum under the names "kraphik3d" and "ahven", and I'm a very old user, going back to c4dcafe.) Please have a look at this link: https://www.daz3d.com/forums/discussion/709536/modo-3d-discontinued-free-to-download-ten-year-offline-license#latest
  26. Perfectly reasonable price for what it does, tbh. If you need to make a Minecraft world environment for a job then it's an absolute bargain; and with the release of the film it's probably a pretty good time to release it.