FCC Introduces New ISP Labels – DTH

The FCC introduced new rules requiring ISPs to display easy-to-read labels on their products at the point of sale, Google rolls out new local search features, and TikTok begins testing its research API.

MP3

Please SUBSCRIBE HERE.

You can get an ad-free feed of Daily Tech Headlines for $3 a month here.

A special thanks to all our supporters–without you, none of this would be possible.

Big thanks to Dan Lueders for the theme music.

Big thanks to Mustafa A. from thepolarcat.com for the logo!

Thanks to our mods, KAPT_Kipper, and PJReese on the subreddit

Send us email to feedback@dailytechnewsshow.com

Show Notes
To read the show notes in a separate page click here.

Making Voting Transparent – DTNS 4398

Pew Research Center released survey data on how teens use and perceive social media. The Verge has an article up about The Browser Company’s Arc browser and their productivity approach to browsing. And we delve into a story about University of Florida computer science professor Juan Gilbert who’s developing an unhackable electronic voting machine.

Starring Tom Merritt, Sarah Lane, Justin Robert Young, Roger Chang, Joe, Amos

MP3 Download

Follow us on Twitter, Instagram, YouTube and Twitch

Please SUBSCRIBE HERE.

Subscribe through Apple Podcasts.

A special thanks to all our supporters–without you, none of this would be possible.

If you are willing to support the show, even with as little as 10 cents a day on Patreon, thank you!

Become a Patron!

Big thanks to Dan Lueders for the headlines music and Martin Bell for the opening theme!

Big thanks to Mustafa A. from thepolarcat.com for the logo!

Thanks to our mods Jack_Shid and KAPT_Kipper on the subreddit

Send us email to feedback@dailytechnewsshow.com

Show Notes
To read the show notes in a separate page click here!


WhatsApp Launches Its Business Directory – NTX 252

WhatsApp launches its business directory, Amazon lays off staff, and most of Blizzard’s games will no longer be offered in China.

MP3


You can SUBSCRIBE HERE.

News:
-Amazon announced layoffs of 3% of its staff, approximately 10,000 workers.
-Nvidia announced it will collaborate with Microsoft on the development of a cloud computing platform focused on artificial intelligence.
-The note-taking and task-management app Evernote was acquired by Italian developer Bending Spoons, in a transaction expected to close in early 2023.
-Blizzard will suspend part of its game catalog in China due to the expiration of its agreement with its partner NetEase.
-WhatsApp formally announced the launch of its business directory, available in Brazil, Indonesia, Mexico, Colombia and the United Kingdom.

Analysis: E-commerce on WhatsApp

You can support Noticias de Tecnología Express directly at this link.
Thanks to everyone who supports us. Without you, none of this would be possible.
Many thanks to Dan Lueders for the music.

Contact us by writing to feedback@dailytechnewsshow.com

Show Notes
To read the show notes on a separate page, click here!

#449 – Finding a Leggy Blonde

Veronica forgets what beer she had. Must be a good beer. It was a blonde in a tall can. A leggy blonde perhaps? Also, we have the World Fantasy Award winner AND the SPFBO finalists! Plus our non-spoilery first impressions of Mur Lafferty’s Six Wakes.

Amazon lays off 3% of staff – DTH

Amazon announced layoffs of approximately 10,000 workers, Nvidia and Microsoft team up on a cloud computing platform focused on AI, and Blizzard will suspend World of Warcraft and other titles in China due to the end of its agreement with NetEase.

MP3

Show Notes
To read the show notes in a separate page click here.

About CUDA


Tom explains the need and use case for CUDA software and hardware and why that may matter to you.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to feedback@dailytechnewsshow.com

Episode transcript:

I was shopping for a graphics card and I noticed one had a lot of CUDA cores

But another one didn’t have any CUDA cores at all

What’s a CUDA core? Can I live without it?

Confused? Don’t be

Let’s help you know a little more about CUDA.

CUDA originally stood for Compute Unified Device Architecture. These days it’s just referred to as CUDA, kind of like CBS in the US used to stand for Columbia Broadcasting System but now is just CBS. Or like KFC tried to do for a while.
CUDA lets software use a GPU, the graphics processor in your computer, like a CPU, the central processor. The approach is sometimes called general-purpose computing on GPUs or GPGPU for short. But mostly in your everyday tech news consumption, you’re going to hear people talk about CUDA and CUDA cores. “This new GPU has 12,000 CUDA cores!” Or something like that.
To oversimplify, it’s a form of parallel processing: it lets the computer do a lot of things at once instead of doing them one at a time, which makes everything faster and also lets you do more things. CUDA is Nvidia’s parallel processing platform, and a CUDA core is a processing unit, the hardware inside the GPU for taking advantage of it. AMD has an equivalent, called Stream Processors.
CUDA is actually a software layer that gives access to the GPU’s virtual instruction set. Basically it lets software meant to run on a CPU get some of the same results from the GPU. You could do this with APIs like Direct3D and OpenGL, but you need to be good at graphics programming. CUDA works with languages like C, C++ and Fortran, so if you do parallel programming already, you should be able to take advantage of CUDA.
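To make that concrete, here is a minimal sketch of what CUDA C++ code looks like. The kernel, array sizes and launch parameters are illustrative examples, not anything from the episode; the point is that each GPU thread just calculates one element, and the launch spreads the work across the CUDA cores.

```cuda
#include <cassert>
#include <cuda_runtime.h>

// Each thread computes one element of the output; thousands run in parallel.
__global__ void addVectors(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's global index
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;            // about a million elements
    const size_t bytes = n * sizeof(float);

    // Unified (managed) memory is visible to both the CPU and the GPU.
    float *a, *b, *out;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&out, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover every element.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    addVectors<<<blocks, threadsPerBlock>>>(a, b, out, n);
    cudaDeviceSynchronize();          // wait for the GPU to finish

    for (int i = 0; i < n; ++i) assert(out[i] == 3.0f);

    cudaFree(a);
    cudaFree(b);
    cudaFree(out);
    return 0;
}
```

Built with nvcc, this runs a million additions as thousands of lightweight GPU threads instead of a CPU loop, which is the GPGPU idea described above.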
Nvidia created CUDA and initially released it on June 23, 2007. It’s Nvidia’s proprietary technology.
CUDA works on most standard operating systems and on all Nvidia GPUs from the G8x series onward, including the GeForce, Quadro and Tesla lines. Tegra, Jetson, Drive and Clara GPUs tend to get specialized versions of CUDA every few point upgrades or so. Nvidia supports the OpenACC standard for parallel processing in CUDA.
GPUs were developed to handle the more intensive task of processing graphics without burdening the CPU. CUDA builds on that, because a GPU can have a lot more cores than a CPU. A CPU core has to fetch from memory, interpret instructions and calculate, because it has to do everything. A GPU core doesn’t need to do everything, so it just calculates. Over time this developed into a parallel system that became very efficient at manipulating any large block of data, whether it was graphics or not.
Besides graphics, modern algorithms used for all kinds of things need to process large blocks of data in parallel, and can often do that better on a GPU. That includes things like machine learning, physics engines, cryptographic hash functions and more. So in gaming, the physics engines that model the real world use it, outside of the actual graphics, and in cryptography it’s used to hash things so they’re hard to crack.
CUDA accelerates these functions. Now it’s worth noting that Nvidia has a couple of other kinds of cores too. There are the surprisingly named ray-tracing cores, specializing in ray-traced graphics. You can do ray tracing on CUDA cores, but not as well as you can on the specialized RT cores. We won’t go into it here, but ray tracing is essentially handling how light moves to improve the look of graphics. (I know that’s not REALLY right, but it gives people who don’t know the gist of it.) And then there are Tensor cores. I won’t even try to summarize what they do, but they end up helping train neural networks for deep learning algorithms. Do you know the importance of matrix multiplication in processing? Great! Then I don’t need to explain what a Tensor core is to you. If you said no, like me, then knowing that Tensor cores have to do with deep learning and neural network training is probably good enough for now.
Back to CUDA!
While CUDA is a software layer, a CUDA core is the hardware part of the GPU that the software can use.
Think of it like this. You have a room full of folks who have air pumps that can inflate footballs. But you also want them to inflate bike tires. The CUDA software is an adapter for their pumps that let them inflate other things besides footballs. And you have a lot of inflatables, basketballs, bouncy castles, floaty ducks. So you have the adapters, the CUDA platform, and a bunch of rooms full of people with pumps, the CUDA cores– so you can send a bouncy castle into one room which may take everyone in the room all morning to inflate but it won’t stop you from inflating footballs, and floaty ducks, because you have other rooms you can send those into.
No. I don’t know where these metaphors come from but surprisingly I think that one works pretty well. If it didn’t, everyone else usually describes CUDA cores as extra pipes that help drain water faster. Whatever works for you.
In the first Fermi GPU from Nvidia in April 2010, you had 16 streaming multiprocessors that each supported 32 cores, for a total of 512 CUDA cores. Remember, like I said earlier, a GPU core does not fetch from memory or decode instructions; it just carries out calculations. This is one of the reasons you can have so many more of them on a card compared to CPU cores. Anyway, being able to just do calculations is cool for 8-bit graphics where you just need to know what pixels go where. CUDA, the software layer, interprets the instructions, coordinates all the cores, and gets them to calculate the right things for more advanced graphics uses and non-graphical uses too.
Are more cores better?
Yes. But not for every version of that question. The performance of a graphics card does not rest on the number of cores alone. If you have slow clock speeds or an inefficient architecture, the number of cores won’t matter much. However, if the architecture is the same and the clock speeds are close, the number of CUDA cores can tell you something useful. This comes into play when comparing all the cards in a single generation, like the Nvidia 4000-series cards or something. But it won’t work across generations. Tech Centurion points out that the Nvidia RTX 2060 has fewer CUDA cores than the GTX 780. But nobody is out there arguing the 780 was a better card than the 2060.
Part of that is because CUDA cores can be built differently too. For instance, the Fermi CUDA cores had a floating-point unit and an integer unit. The CUDA cores in the Ampere architecture have two 32-bit floating-point processing units, so a core can handle two 32-bit floating-point operations, or one 32-bit floating-point and one integer operation, every cycle. In other words, a CUDA core could do a lot more in the Ampere architecture than it could in Fermi.
So remember: the number of CUDA cores just tells you that more data can be processed in parallel overall than on a card with fewer of the same kind of CUDA cores. The clock speed tells you whether a single core can perform faster. The architecture tells you how much each core can do per cycle. And then the software layer can affect things, as can the size of the transistors.
And you really can’t compare AMD’s Stream Processors to Nvidia’s CUDA cores. They work differently and use different software platforms. They’re similar in the way that, say, an apple and an orange are both fruits and deliver fructose. They do the same things: hydrate you, give you vitamins, and deliver about the same number of calories. But they have very different ways of going about it.
This is why people do benchmarks. Just let me know what it actually does in practice. Thanks.
To sum up, CUDA cores help your NVidia GPU do more but the number of them is only helpful as a comparison of cards within the same Nvidia generation of GPUs.
In other words, I hope you know a little more about CUDA.

Libraries Jump Into the Streams – DTNS 4397

Alphabet’s Mineral project has a partnership with berry grower Driscoll’s to monitor the condition of its crops. Motherboard has a story on how local libraries are launching their own local streaming music services. And the latest on the state of chip supplies.

Starring Tom Merritt, Sarah Lane, Roger Chang, Joe, Amos

MP3 Download

Show Notes
To read the show notes in a separate page click here!


Galactica Facilitates Scientific Research – NTX 251

Google Wallet arrives in Mexico, Netflix helps you cut off freeloaders, and Meta collaborates on the development of Galactica

MP3


News:
-Netflix launched a new “Manage access and devices” setting that lets account owners view, and remotely sign out of, the devices where their account is in use.
-Google Wallet has finally arrived in Mexico. At launch, Google partnered with Nu, Hey Banco, Banorte, Banregio, Inbursa, Mastercard and Visa.
-NASA launched its Orion spacecraft, the first spacecraft to use Time-Triggered Ethernet, or TTE, for mixed-criticality traffic on a single network.
-Microsoft launched a new Games for Work app in Teams, which lets coworkers play Solitaire, Minesweeper, Wordament or IceBreakers with each other during a meeting.
-Meta AI collaborated with the community project Papers with Code to build Galactica, a large language model designed to store, combine and reason about scientific content.

Analysis: What is a large language model good for?

Show Notes
To read the show notes on a separate page, click here!

Miami Vice (320) – It’s Spoilerin’ Time 431

Next week: Twenty Five Twenty One (103), The White Lotus (204), Rick and Morty (607)

Email the show at Cordkillers@gmail.com
Subscribe, get expanded show notes, and past episodes at Cordkillers.com

Support Cordkillers at Patreon.com/Cordkillers. If we get to 1850 patrons or $1850/episode, we can begin the Spoilerin’ Project and give you show-based Spoilerin’ Time feeds. Find out more and pledge here.

Download audio

The White Lotus (203) – It’s Spoilerin’ Time 431

Download audio