{"id":57526,"date":"2023-08-23T09:42:22","date_gmt":"2023-08-23T13:42:22","guid":{"rendered":"https:\/\/coinscreed.com\/staging\/?p=57526"},"modified":"2023-08-23T09:45:34","modified_gmt":"2023-08-23T13:45:34","slug":"openais-customized-ai-offering-gets-mixed-reactions-from-devs","status":"publish","type":"post","link":"https:\/\/coinscreed.com\/staging\/openais-customized-ai-offering-gets-mixed-reactions-from-devs\/","title":{"rendered":"OpenAI&#8217;s Customized AI Offering Gets Mixed Reactions from Devs"},"content":{"rendered":"\n<p>OpenAI has introduced a fine-tuning option for GPT-3.5 Turbo, allowing<a href=\"https:\/\/coinscreed.com\/staging\/studio-releases-proposal-for-ai-data-transparency-standards.html\" target=\"_blank\" rel=\"noreferrer noopener\"> artificial intelligence<\/a> (AI) developers to improve performance on specific tasks using dedicated data. However, developers have both criticized and expressed enthusiasm for the development.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1024\" height=\"683\" src=\"https:\/\/coinscreed.com\/staging\/wp-content\/uploads\/2023\/05\/OPE-1024x683.jpg\" alt=\"OpenAI's Customized AI Offering Gets Mixed Reactions from Devs\" class=\"wp-image-50729\" srcset=\"https:\/\/coinscreed.com\/staging\/wp-content\/uploads\/2023\/05\/OPE-1024x683.jpg 1024w, https:\/\/coinscreed.com\/staging\/wp-content\/uploads\/2023\/05\/OPE-300x200.jpg 300w, https:\/\/coinscreed.com\/staging\/wp-content\/uploads\/2023\/05\/OPE-768x512.jpg 768w, https:\/\/coinscreed.com\/staging\/wp-content\/uploads\/2023\/05\/OPE-1536x1024.jpg 1536w, https:\/\/coinscreed.com\/staging\/wp-content\/uploads\/2023\/05\/OPE-2048x1366.jpg 2048w, https:\/\/coinscreed.com\/staging\/wp-content\/uploads\/2023\/05\/OPE-1320x880.jpg 1320w, https:\/\/coinscreed.com\/staging\/wp-content\/uploads\/2023\/05\/OPE-750x500.jpg 750w, 
https:\/\/coinscreed.com\/staging\/wp-content\/uploads\/2023\/05\/OPE-1140x760.jpg 1140w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">OpenAI's Customized AI Offering Gets Mixed Reactions from Devs<\/figcaption><\/figure>\n\n\n\n<p>OpenAI clarified that developers can tailor the capabilities of GPT-3.5 Turbo to their needs through fine-tuning. Using a data set derived from the client's business operations, a developer could, for instance, fine-tune GPT-3.5 Turbo to generate customized code or expertly summarize German legal documents.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter\"><div class=\"wp-block-embed__wrapper\">\n<div class=\"embed-twitter\"><blockquote class=\"twitter-tweet\" data-width=\"550\" data-dnt=\"true\"><p lang=\"en\" dir=\"ltr\">You can now fine-tune GPT-3.5-Turbo!<br><br>Seems like inference is significantly more expensive (8x more) though.<br><br>My guess is that anyone with the ability to deploy their own models won\u2019t be swayed by this. 
<a href=\"https:\/\/t.co\/p2LbSq4D2H\" target=\"_blank\">https:\/\/t.co\/p2LbSq4D2H<span class=\"wpil-link-icon\" title=\"Link goes to external site.\" style=\"margin: 0 0 0 5px;\"><svg width=\"24\" height=\"24\" style=\"height:16px; width:16px; fill:#000000; stroke:#000000; display:inline-block;\" viewBox=\"0 0 24 24\" version=\"1.1\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" xmlns:svg=\"http:\/\/www.w3.org\/2000\/svg\"><g id=\"wpil-svg-outbound-7-icon-path\" fill=\"none\" clip-path=\"url(#clip0_31_188)\">\r\n                            <path d=\"M9.16724 14.8891L20.1672 3.88908\" stroke-linecap=\"round\"\/>\r\n                            <path d=\"M13.4497 3.53554L20.5208 3.53554L20.5208 10.6066\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\r\n                            <path d=\"M17.5 13.5L17.5 16.26C17.5 17.4179 17.5 17.9968 17.2675 18.4359C17.0799 18.7902 16.7902 19.0799 16.4359 19.2675C15.9968 19.5 15.4179 19.5 14.26 19.5L7.74 19.5C6.58213 19.5 6.0032 19.5 5.56414 19.2675C5.20983 19.0799 4.92007 18.7902 4.73247 18.4359C4.5 17.9968 4.5 17.4179 4.5 16.26L4.5 9.74C4.5 8.58213 4.5 8.0032 4.73247 7.56414C4.92007 7.20983 5.20982 6.92007 5.56414 6.73247C6.0032 6.5 6.58213 6.5 7.74 6.5L11 6.5\" stroke-linecap=\"round\"\/>\r\n                        <\/g>\r\n                        <defs>\r\n                            <clipPath id=\"clip0_31_188\">\r\n                                <rect fill=\"white\" height=\"24\" width=\"24\"\/>\r\n                            <\/clipPath>\r\n                        <\/defs><\/svg><\/span><\/a><\/p>&mdash; Mark Tenenholtz (@marktenenholtz) <a href=\"https:\/\/twitter.com\/marktenenholtz\/status\/1694084743321514468?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">August 22, 2023<span class=\"wpil-link-icon\" title=\"Link goes to external site.\" style=\"margin: 0 0 0 5px;\"><svg width=\"24\" height=\"24\" style=\"height:16px; width:16px; fill:#000000; stroke:#000000; display:inline-block;\" viewBox=\"0 0 24 24\" 
version=\"1.1\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" xmlns:svg=\"http:\/\/www.w3.org\/2000\/svg\"><use href=\"#wpil-svg-outbound-7-icon-path\"><\/use><\/svg><\/span><\/a><\/blockquote><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/div>\n<\/div><\/figure>\n\n\n\n<p>The recent announcement has prompted developers to respond with caution. According to a comment attributed to X user Joshua Segeren, adding fine-tuning to GPT-3.5 Turbo is intriguing but not a comprehensive remedy.&nbsp;<\/p>\n\n\n\n<p>According to his observations, enhancing prompts, employing vector databases for semantic queries, or transitioning to GPT-4 typically yields superior results to custom training. In addition, there are additional factors to consider, such as installation and ongoing maintenance expenses.<\/p>\n\n\n\n<p>The base GPT-3.5 Turbo models begin at $0.0004 per one thousand tokens (the basic units processed by extensive <a href=\"https:\/\/en.wikipedia.org\/wiki\/Language_model#:~:text=A%20language%20model%20is%20a,feedforward%20neural%20networks%20and%20transformers.\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">language models<span class=\"wpil-link-icon\" title=\"Link goes to external site.\" style=\"margin: 0 0 0 5px;\"><svg width=\"24\" height=\"24\" style=\"height:16px; width:16px; fill:#000000; stroke:#000000; display:inline-block;\" viewBox=\"0 0 24 24\" version=\"1.1\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" xmlns:svg=\"http:\/\/www.w3.org\/2000\/svg\"><use href=\"#wpil-svg-outbound-7-icon-path\"><\/use><\/svg><\/span><\/a>). However, due to fine-tuning, the refined versions cost $0.012 per 1,000 input tokens and $0.016 per 1,000 output tokens. In addition, an initial training fee based on data volume is charged.<\/p>\n\n\n\n<p>This feature is essential for businesses and developers who wish to create personalized user interactions. 
For example, companies can fine-tune the model to match their brand's voice, ensuring the chatbot maintains a consistent personality and tone that fits the brand's identity.<\/p>\n\n\n\n<p>To ensure responsible use of the fine-tuning feature, OpenAI's Moderation API and a GPT-4-powered moderation system review the training data submitted for fine-tuning. This is done to preserve the safety features of the default model throughout the fine-tuning process.<\/p>\n\n\n\n<p>The system identifies and removes potentially harmful training data, ensuring the fine-tuned model conforms to OpenAI's established <a href=\"https:\/\/coinscreed.com\/staging\/the-increasing-importance-of-crypto-wallet-security.html\" target=\"_blank\" rel=\"noreferrer noopener\">security standards<\/a>. It also means OpenAI retains some oversight of the data users feed into its models.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI has introduced a fine-tuning option for GPT-3.5 Turbo, allowing artificial intelligence (AI) developers to improve performance on specific tasks using dedicated data. However, developers have both criticized and expressed enthusiasm for the development. OpenAI clarified that developers can tailor the capabilities of GPT-3.5 Turbo to their needs through fine-tuning. 
Using a data set derived [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":50729,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[9],"tags":[3996,8795,16153,14081],"class_list":["post-57526","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech","tag-ai","tag-developers-2","tag-gpt-3-5","tag-openai"],"jetpack_featured_media_url":"https:\/\/coinscreed.com\/staging\/wp-content\/uploads\/2023\/05\/OPE.jpg","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/coinscreed.com\/staging\/wp-json\/wp\/v2\/posts\/57526","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/coinscreed.com\/staging\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/coinscreed.com\/staging\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/coinscreed.com\/staging\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/coinscreed.com\/staging\/wp-json\/wp\/v2\/comments?post=57526"}],"version-history":[{"count":0,"href":"https:\/\/coinscreed.com\/staging\/wp-json\/wp\/v2\/posts\/57526\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/coinscreed.com\/staging\/wp-json\/wp\/v2\/media\/50729"}],"wp:attachment":[{"href":"https:\/\/coinscreed.com\/staging\/wp-json\/wp\/v2\/media?parent=57526"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/coinscreed.com\/staging\/wp-json\/wp\/v2\/categories?post=57526"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/coinscreed.com\/staging\/wp-json\/wp\/v2\/tags?post=57526"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}