{"id":4777,"date":"2025-04-08T17:18:16","date_gmt":"2025-04-08T21:18:16","guid":{"rendered":"https:\/\/blog.daed.com\/?p=4777"},"modified":"2025-04-09T11:30:07","modified_gmt":"2025-04-09T15:30:07","slug":"low-cost-reasoning-ai-models-disrupt-nvidia-and-openai","status":"publish","type":"post","link":"https:\/\/blog.daed.com\/?p=4777","title":{"rendered":"Reasoning Models and the DeepSeek Panic"},"content":{"rendered":"<p>In 2024, AI companies like NVIDIA and OpenAI experienced rapid growth. In particular, many investors believed that demand for high-end Graphics Processing Units (GPUs), a special kind of hardware well-suited to developing and running AI models, would continue to grow rapidly. Recent AI models have required ever-bigger GPU clusters in order to begin to &#8220;reason&#8221; more like the human brain. Companies are training and releasing innovative new LLMs with increasing frequency, such as Meta&#8217;s Llama series of models, the first widely popular open-source LLMs, and Mistral AI&#8217;s Mixtral, the first prominent open-source Mixture-of-Experts model. In spite of these innovations, in early 2025, fears of disruption drove down the valuations of companies like NVIDIA. Are these \u201creasoning\u201d models actually the next step towards truly intelligent AI? And why did the release of a low-cost AI model from the Chinese company DeepSeek upset investors?<\/p>\n<h3>Thinking about AI<\/h3>\n<p>In September 2024, OpenAI <a href=\"https:\/\/openai.com\/index\/introducing-openai-o1-preview\/\">announced<\/a> its new o1 series of models. OpenAI claimed that these models, dubbed \u201cReasoning Models\u201d, allowed LLMs to think through logical problems like humans do. The actual details of how the o1 model series works have not been disclosed, but OpenAI did explain that it uses a process called \u201cChain-of-Thought.\u201d Although the models are architecturally similar to previous GPT generations, o1 executes multiple \u201cturns\u201d of inference. 
In effect, o1 generates a response and feeds that response back into itself for critique. o1 can even generate multiple different responses, rank their correctness, and choose the best one. It is also trained to break problems down into smaller, more manageable steps and produce separate outputs for each step. The result of all this extra &#8220;reasoning&#8221; is one final, concise response. While the models still aren\u2019t \u201cthinking\u201d in the traditional sense, they\u2019re able to reason about problems by doing more processing to ensure correctness. If the computing infrastructure could be scaled to match o1\u2019s greater demands, correctness would theoretically improve by 100 times over GPT-4.<\/p>\n<figure id=\"attachment_4798\" aria-describedby=\"caption-attachment-4798\" style=\"width: 800px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-4798\" src=\"http:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/nana-dua-A1blvxJxGU0-unsplash1-1024x683.jpg\" alt=\"A photo of two NVIDIA GPUs\" width=\"800\" height=\"534\" srcset=\"https:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/nana-dua-A1blvxJxGU0-unsplash1-1024x683.jpg 1024w, https:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/nana-dua-A1blvxJxGU0-unsplash1-300x200.jpg 300w, https:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/nana-dua-A1blvxJxGU0-unsplash1-768x512.jpg 768w, https:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/nana-dua-A1blvxJxGU0-unsplash1-1536x1024.jpg 1536w, https:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/nana-dua-A1blvxJxGU0-unsplash1.jpg 1920w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><figcaption id=\"caption-attachment-4798\" class=\"wp-caption-text\">Two NVIDIA GPUs &#8211; Photo by <a href=\"https:\/\/unsplash.com\/@nanadua11?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash\">Nana Dua<\/a> on <a 
href=\"https:\/\/unsplash.com\/photos\/a-close-up-of-a-graphics-card-on-a-table-A1blvxJxGU0?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash\">Unsplash<\/a><\/figcaption><\/figure>\n<p>This brings up the elephant in the room. Running LLMs and generative AI models is already extremely expensive because it requires a massive number of the most powerful GPUs. NVIDIA, the leading manufacturer of GPUs, saw its stock price boom in 2024 due to the increased demand. Multiplying the inference passes needed to produce each response also dramatically <a href=\"https:\/\/www.cnbc.com\/2025\/02\/26\/nvidia-ceo-huang-says-next-generation-ai-will-need-more-compute.html\">increases the need<\/a> for compute. Investors in NVIDIA and other companies supporting compute-heavy AI methods saw this as an opportunity, and valuations continued to climb. As the first company to publicly release a model like o1, OpenAI held its dominant position and could charge more without fundamentally changing the underlying model architecture. Companies like Anthropic, xAI, and Google all followed suit, releasing their own reasoning models within a matter of weeks. It seemed like OpenAI and other companies developing and supporting high-compute AI models were going to continue to benefit from LLM growth.<\/p>\n<h3>A New Challenger<\/h3>\n<p>Then, in January of 2025, the Chinese company DeepSeek <a href=\"https:\/\/www.reuters.com\/technology\/artificial-intelligence\/what-is-deepseek-why-is-it-disrupting-ai-sector-2025-01-27\/\">launched<\/a> an LLM that they claimed was created cheaply and quickly<span style=\"font-weight: 400;\">\u2014<\/span>with comparable performance to o1.<\/p>\n<p>DeepSeek had previously entered the open-source LLM space with non-reasoning models like DeepSeek Coder. While these LLMs were not revolutionary, they helped hobbyists and smaller companies run LLMs locally and fine-tune them cost-effectively. 
To improve on o1, DeepSeek\u2019s researchers prioritized data quality over quantity.<br \/>\n<figure id=\"attachment_4814\" aria-describedby=\"caption-attachment-4814\" style=\"width: 800px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-4814\" src=\"http:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/GPTcrop.png\" alt=\"A screenshot of DeepSeek R1 responding to the prompt, \"Come up with a solution to the ABC Conjecture.\" The model \"thinks\" for 685 seconds by writing a verbose inner monologue to reason about the problem.\" width=\"1200\" height=\"675\" srcset=\"https:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/GPTcrop.png 1200w, https:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/GPTcrop-300x169.png 300w, https:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/GPTcrop-1024x576.png 1024w, https:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/GPTcrop-768x432.png 768w\" sizes=\"(max-width: 1200px) 100vw, 1200px\" \/><figcaption id=\"caption-attachment-4814\" class=\"wp-caption-text\">DeepSeek R1 has a chatty inner monologue when thinking about how to solve a problem.<\/figcaption><\/figure><\/p>\n<p>Instead of focusing on generating more data, DeepSeek used a small amount of high-quality training data to train smaller LLMs to perform almost as well as much larger models. Then, to train the model, their researchers used reinforcement learning, a process in which an algorithm ranks the correctness of the LLM\u2019s results. This is more efficient than supervised learning, which requires humans to manually rank the output. While OpenAI has not released all of the details of its training methods, it is safe to assume that it used a combination of reinforcement learning and supervised learning during the training of o1. 
DeepSeek instead trained a base model on its higher-quality dataset using these reinforcement learning techniques, followed by a much shorter supervised fine-tuning pass to improve readability. By using a smaller dataset and more efficient learning techniques, DeepSeek was able to train an LLM on far less powerful GPU clusters than prior LLMs required. The result was DeepSeek R1: a model almost as competent as OpenAI\u2019s o1, trained for a fraction of the cost. Furthermore, R1 is open-source and much smaller than o1, meaning that almost anyone can run it on high-end consumer hardware without the most expensive NVIDIA GPUs.<\/p>\n<p>Investors panicked. Stock prices for hardware companies fell. The news media ran <a href=\"https:\/\/www.theguardian.com\/commentisfree\/2025\/jan\/30\/ai-arms-race-china-deepseek\">headlines<\/a> with geopolitical overtones claiming that China had bested western companies in AI. Although other companies including Anthropic, the maker of Claude, and Mistral, a French AI startup, had made progress on their own reasoning models, the cost-savings introduced by R1 were not initially matched.<\/p>\n<figure id=\"attachment_4788\" aria-describedby=\"caption-attachment-4788\" style=\"width: 679px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4788 size-full\" src=\"http:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/Screenshot-from-2025-04-03-11-22-03.png\" alt=\"A screenshot of the stock price graph for the NVIDIA corporation. A sharp decline is observed on January 22nd, the day DeepSeek R1 was released. 
The graph trends downward from that point onward.\" width=\"679\" height=\"477\" srcset=\"https:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/Screenshot-from-2025-04-03-11-22-03.png 679w, https:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/Screenshot-from-2025-04-03-11-22-03-300x211.png 300w\" sizes=\"(max-width: 679px) 100vw, 679px\" \/><figcaption id=\"caption-attachment-4788\" class=\"wp-caption-text\">The exact release date of DeepSeek R1 is directly observable in NVIDIA&#8217;s stock price graph<\/figcaption><\/figure>\n<h3>Cheaper Hardware, More Brain Power<\/h3>\n<p>Soon, competitors to R1 began to appear. Qwen QwQ, a reasoning model developed by China\u2019s Alibaba, is an order of magnitude smaller than R1 but performs comparably on most LLM benchmarks. It\u2019s a remarkably competent reasoning model that can be run on consumer-grade hardware. Since both R1 and QwQ are open-source models, it\u2019s very likely that we\u2019ll continue to see innovations built on their work.<\/p>\n<p>The real question in the wake of these models is whether the lofty claims by hardware companies like NVIDIA about the urgent need for more GPUs will hold up. These new models still rely on GPUs to be trained, but they\u2019ve shown that high-performance LLMs can be trained and deployed with less computing power than previously thought. Big AI players like OpenAI are struggling to keep their huge models running as use and capabilities expand. Sam Altman, CEO of OpenAI, has <a href=\"https:\/\/techcrunch.com\/2025\/01\/05\/openai-is-losing-money-on-its-pricey-chatgpt-pro-plan-ceo-sam-altman-says\/\">claimed<\/a> that even with OpenAI&#8217;s massive success and multiple paid tiers, ChatGPT is not profitable and effectively loses money on every response it returns.<\/p>\n<p>Although the release of these capable models that can perform on low-cost hardware unsettled financial markets, they could boost the US and global economy in other ways. 
Models that can be trained and deployed for less can be used in many more products, making the incorporation of LLMs into technology stacks easier than ever. With more engineers able to use these models, we can expect more innovation to be built on them. We also have yet to see how western AI companies will respond. For example, what happens when lightweight models are supported by the gigantic GPU clusters being built by OpenAI and xAI? While the emergence of these low-cost reasoning models has made the ultimate winners in the AI race less clear, the wider applicability has increased the potential for overall economic growth.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In 2024, AI companies like NVIDIA and OpenAI experienced rapid growth. In particular, many investors believed that demand for high-end Graphics Processing Units (GPUs), a special kind of hardware well-suited &#8230;<\/p>\n","protected":false},"author":1,"featured_media":4780,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8,222],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.10 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Reasoning Models and the DeepSeek Panic - daed.com<\/title>\n<meta name=\"description\" content=\"Explore how cost-efficient AI reasoning models like DeepSeek R1 are shaping the future of AI. 
Learn about reasoning models, the rise and fall of GPU demand, and how new efficient training methods are making powerful AI more accessible.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.daed.com\/?p=4777\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Reasoning Models and the DeepSeek Panic - daed.com\" \/>\n<meta property=\"og:description\" content=\"Explore how cost-efficient AI reasoning models like DeepSeek R1 are shaping the future of AI. Learn about reasoning models, the rise and fall of GPU demand, and how new efficient training methods are making powerful AI more accessible.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.daed.com\/?p=4777\" \/>\n<meta property=\"og:site_name\" content=\"daed.com\" \/>\n<meta property=\"article:published_time\" content=\"2025-04-08T21:18:16+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-09T15:30:07+00:00\" \/>\n<meta property=\"og:image\" content=\"http:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/DeepSeek-Smaller-Size.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1610\" \/>\n\t<meta property=\"og:image:height\" content=\"1074\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Daedalus\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Daedalus\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/blog.daed.com\/?p=4777#article\",\"isPartOf\":{\"@id\":\"https:\/\/blog.daed.com\/?p=4777\"},\"author\":{\"name\":\"Daedalus\",\"@id\":\"https:\/\/blog.daed.com\/#\/schema\/person\/ffe3d55f759956aa85792c64b0d0f984\"},\"headline\":\"Reasoning Models and the DeepSeek Panic\",\"datePublished\":\"2025-04-08T21:18:16+00:00\",\"dateModified\":\"2025-04-09T15:30:07+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/blog.daed.com\/?p=4777\"},\"wordCount\":1209,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/blog.daed.com\/#organization\"},\"articleSection\":[\"Products &amp; Culture\",\"Software Engineering\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/blog.daed.com\/?p=4777#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/blog.daed.com\/?p=4777\",\"url\":\"https:\/\/blog.daed.com\/?p=4777\",\"name\":\"Reasoning Models and the DeepSeek Panic - daed.com\",\"isPartOf\":{\"@id\":\"https:\/\/blog.daed.com\/#website\"},\"datePublished\":\"2025-04-08T21:18:16+00:00\",\"dateModified\":\"2025-04-09T15:30:07+00:00\",\"description\":\"Explore how cost-efficient AI reasoning models like DeepSeek R1 are shaping the future of AI. 
Learn about reasoning models, the rise and fall of GPU demand, and how new efficient training methods are making powerful AI more accessible.\",\"breadcrumb\":{\"@id\":\"https:\/\/blog.daed.com\/?p=4777#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/blog.daed.com\/?p=4777\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/blog.daed.com\/?p=4777#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/blog.daed.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Reasoning Models and the DeepSeek Panic\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blog.daed.com\/#website\",\"url\":\"https:\/\/blog.daed.com\/\",\"name\":\"daed.com\",\"description\":\"research, design, and engineering thinking\",\"publisher\":{\"@id\":\"https:\/\/blog.daed.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/blog.daed.com\/?s={search_term_string}\"},\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/blog.daed.com\/#organization\",\"name\":\"daed.com\",\"url\":\"https:\/\/blog.daed.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/blog.daed.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/blog.daed.com\/wp-content\/uploads\/2019\/10\/White_Daedalus.png\",\"contentUrl\":\"https:\/\/blog.daed.com\/wp-content\/uploads\/2019\/10\/White_Daedalus.png\",\"width\":5249,\"height\":745,\"caption\":\"daed.com\"},\"image\":{\"@id\":\"https:\/\/blog.daed.com\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.daed.com\/#\/schema\/person\/ffe3d55f759956aa85792c64b0d0f984\",\"name\":\"Daedalus\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/blog.daed.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e20c340f902dee069802693b059956ff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e20c340f902dee069802693b059956ff?s=96&d=mm&r=g\",\"caption\":\"Daedalus\"},\"url\":\"https:\/\/blog.daed.com\/?author=1\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Reasoning Models and the DeepSeek Panic - daed.com","description":"Explore how cost-efficient AI reasoning models like DeepSeek R1 are shaping the future of AI. Learn about reasoning models, the rise and fall of GPU demand, and how new efficient training methods are making powerful AI more accessible.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.daed.com\/?p=4777","og_locale":"en_US","og_type":"article","og_title":"Reasoning Models and the DeepSeek Panic - daed.com","og_description":"Explore how cost-efficient AI reasoning models like DeepSeek R1 are shaping the future of AI. 
Learn about reasoning models, the rise and fall of GPU demand, and how new efficient training methods are making powerful AI more accessible.","og_url":"https:\/\/blog.daed.com\/?p=4777","og_site_name":"daed.com","article_published_time":"2025-04-08T21:18:16+00:00","article_modified_time":"2025-04-09T15:30:07+00:00","og_image":[{"width":1610,"height":1074,"url":"http:\/\/blog.daed.com\/wp-content\/uploads\/2025\/04\/DeepSeek-Smaller-Size.png","type":"image\/png"}],"author":"Daedalus","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Daedalus","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.daed.com\/?p=4777#article","isPartOf":{"@id":"https:\/\/blog.daed.com\/?p=4777"},"author":{"name":"Daedalus","@id":"https:\/\/blog.daed.com\/#\/schema\/person\/ffe3d55f759956aa85792c64b0d0f984"},"headline":"Reasoning Models and the DeepSeek Panic","datePublished":"2025-04-08T21:18:16+00:00","dateModified":"2025-04-09T15:30:07+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.daed.com\/?p=4777"},"wordCount":1209,"commentCount":0,"publisher":{"@id":"https:\/\/blog.daed.com\/#organization"},"articleSection":["Products &amp; Culture","Software Engineering"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/blog.daed.com\/?p=4777#respond"]}]},{"@type":"WebPage","@id":"https:\/\/blog.daed.com\/?p=4777","url":"https:\/\/blog.daed.com\/?p=4777","name":"Reasoning Models and the DeepSeek Panic - daed.com","isPartOf":{"@id":"https:\/\/blog.daed.com\/#website"},"datePublished":"2025-04-08T21:18:16+00:00","dateModified":"2025-04-09T15:30:07+00:00","description":"Explore how cost-efficient AI reasoning models like DeepSeek R1 are shaping the future of AI. 
Learn about reasoning models, the rise and fall of GPU demand, and how new efficient training methods are making powerful AI more accessible.","breadcrumb":{"@id":"https:\/\/blog.daed.com\/?p=4777#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.daed.com\/?p=4777"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/blog.daed.com\/?p=4777#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/blog.daed.com\/"},{"@type":"ListItem","position":2,"name":"Reasoning Models and the DeepSeek Panic"}]},{"@type":"WebSite","@id":"https:\/\/blog.daed.com\/#website","url":"https:\/\/blog.daed.com\/","name":"daed.com","description":"research, design, and engineering thinking","publisher":{"@id":"https:\/\/blog.daed.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.daed.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/blog.daed.com\/#organization","name":"daed.com","url":"https:\/\/blog.daed.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blog.daed.com\/#\/schema\/logo\/image\/","url":"https:\/\/blog.daed.com\/wp-content\/uploads\/2019\/10\/White_Daedalus.png","contentUrl":"https:\/\/blog.daed.com\/wp-content\/uploads\/2019\/10\/White_Daedalus.png","width":5249,"height":745,"caption":"daed.com"},"image":{"@id":"https:\/\/blog.daed.com\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/blog.daed.com\/#\/schema\/person\/ffe3d55f759956aa85792c64b0d0f984","name":"Daedalus","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blog.daed.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e20c340f902dee069802693b059956ff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e20c340f902dee069802693b059956ff?s=96&d=mm&r=g","caption":"Daedalu
s"},"url":"https:\/\/blog.daed.com\/?author=1"}]}},"_links":{"self":[{"href":"https:\/\/blog.daed.com\/index.php?rest_route=\/wp\/v2\/posts\/4777"}],"collection":[{"href":"https:\/\/blog.daed.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.daed.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.daed.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.daed.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4777"}],"version-history":[{"count":26,"href":"https:\/\/blog.daed.com\/index.php?rest_route=\/wp\/v2\/posts\/4777\/revisions"}],"predecessor-version":[{"id":4818,"href":"https:\/\/blog.daed.com\/index.php?rest_route=\/wp\/v2\/posts\/4777\/revisions\/4818"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.daed.com\/index.php?rest_route=\/wp\/v2\/media\/4780"}],"wp:attachment":[{"href":"https:\/\/blog.daed.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4777"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.daed.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4777"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.daed.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4777"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}