
When it comes to autonomous driving, Tesla's "pure vision" approach usually dominates the headlines. This vision-first route tries to perceive and understand the environment around the vehicle by running deep-learning algorithms over footage from on-board cameras. Wayve's research team, however, has taken a different path and brought language interaction into the autonomous driving system. Their latest result, LINGO-2, lets a car not only understand a driver's spoken instructions but also explain its own decisions in natural language. This advance opens up new possibilities for driverless vehicles.

Is LINGO-2 redefining autonomous driving?

LINGO-2 is not meant to replace visual perception but to complement it. By using language instructions to refine the decision process, LINGO-2 can improve an autonomous driving system's ability to handle complex scenarios. For example, in bad weather or around road works, the driver can prompt the vehicle by voice, and LINGO-2 responds appropriately based on the instruction and its own perception while explaining what it is doing. This kind of human-machine collaboration not only improves safety but also makes the decision process more transparent.

In that sense, LINGO-2 represents an "alternative" revolution in autonomous driving. It probes new frontiers of AI in cognition and interaction and broadens the paths toward driverless vehicles. Going forward, visual perception and language interaction are likely to become deeply integrated.

17 April 2024 | Research

LINGO-2: Driving with Natural Language

This blog introduces LINGO-2, a driving model that links vision, language, and action to explain and determine driving behavior, opening up a new dimension of control and customization for an autonomous driving experience. LINGO-2 is the first closed-loop vision-language-action driving model (VLAM) tested on public roads.

Driving with Natural Language

In September 2023, we introduced natural language for autonomous driving in our blog on LINGO-1, an open-loop driving commentator that was a first step towards trustworthy autonomous driving technology. In November 2023, we further improved the accuracy and trustworthiness of LINGO-1's responses by adding a "show and tell" capability through referential segmentation. Today, we are excited to present the next step in Wayve's pioneering work incorporating natural language to enhance our driving models: introducing LINGO-2, a closed-loop vision-language-action driving model (VLAM) that is the first driving model trained on language tested on public roads. In this blog post, we share the technical details of our approach and examples of LINGO-2's capability to combine language and action to accelerate the safe development of Wayve's AI driving models.

Introducing LINGO-2, a closed-loop Vision-Language-Action Model (VLAM)

Our previous model, LINGO-1, was an open-loop driving commentator that leveraged vision-language inputs to perform visual question answering (VQA) and driving commentary on tasks such as describing scene understanding, reasoning, and attention, providing only language as an output. This research model was an important first step in using language to understand what the model comprehends about the driving scene. LINGO-2 takes that one step further, providing visibility into the decision-making process of a driving model.

LINGO-2 combines vision and language as inputs and outputs both driving action and language, providing a continuous driving commentary of its motion planning decisions. LINGO-2 adapts its actions and explanations in accordance with various scene elements and is a strong first indication of the alignment between explanations and decision-making. By linking language and action directly, LINGO-2 sheds light on how AI systems make decisions and opens up a new level of control and customization for driving.

While LINGO-1 could retrospectively generate commentary on driving scenarios, its commentary was not integrated with the driving model. Therefore, its observations were not informed by actual driving decisions. However, LINGO-2 can both generate real-time driving commentary and control a car. The linking of these fundamental modalities underscores the model's profound understanding of the contextual semantics of the situation, for example, explaining that it's slowing down for pedestrians on the road or executing an overtaking maneuver. It's a crucial step towards enhancing trust in our assisted and autonomous driving systems.

It opens up new possibilities for accelerating learning with natural language by incorporating a description of driving actions and causal reasoning into the model's training. In the future, natural language interfaces could even allow users to engage in conversations with the driving model, making it easier for people to understand these systems and build trust.

LINGO-2 Architecture: Multi-modal Transformer for Driving

LINGO-2 architecture

LINGO-2 consists of two modules: the Wayve vision model and the auto-regressive language model. The vision model processes camera images of consecutive timestamps into a sequence of tokens. These tokens and additional conditioning variables, such as route, current speed, and speed limit, are fed into the language model. Equipped with these inputs, the language model is trained to predict a driving trajectory and commentary text. Then, the car's controller executes the driving trajectory.

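The two-module pipeline can be sketched in a few lines of Python. This is a hedged illustration, not Wayve's code: `vision_model`, `language_model`, `Conditioning`, and `drive_step` are hypothetical stand-ins that only mimic the data flow (camera frames to tokens; tokens plus conditioning to a trajectory and commentary).

```python
# Hypothetical sketch of the LINGO-2 data flow described above. All names
# here are illustrative stand-ins, not Wayve's API: the real modules are
# learned networks, while these are deterministic stubs.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Conditioning:
    route: str              # routing instruction, e.g. "continue straight"
    speed_mps: float        # current speed
    speed_limit_mps: float  # posted speed limit

def vision_model(frames: List[bytes]) -> List[int]:
    """Stand-in vision encoder: one dummy token per camera frame."""
    return [hash(f) % 1000 for f in frames]

def language_model(vision_tokens: List[int],
                   cond: Conditioning) -> Tuple[List[Tuple[float, float]], str]:
    """Stand-in auto-regressive model: predicts a driving trajectory
    (here just a straight line scaled by current speed) plus commentary."""
    trajectory = [(cond.speed_mps * t, 0.0) for t in (0.5, 1.0, 1.5)]
    commentary = f"Following the route: {cond.route}."
    return trajectory, commentary

def drive_step(frames: List[bytes], cond: Conditioning):
    tokens = vision_model(frames)                  # frames -> tokens
    trajectory, commentary = language_model(tokens, cond)
    return trajectory, commentary                  # controller executes trajectory

traj, text = drive_step([b"frame0", b"frame1"],
                        Conditioning("continue straight", 5.0, 13.4))
print(text)  # Following the route: continue straight.
```

The stubs make the interface easy to trace: only the trajectory goes to the controller, while the commentary is a parallel output of the same prediction step.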
LINGO-2’s New Capabilities

The integration of language and driving opens up new capabilities for autonomous driving and human-vehicle interaction, including:

1. Adapting driving behavior through language prompts: We can prompt LINGO-2 with constrained navigation commands (e.g., "pull over," "turn right," etc.) and adapt the vehicle's behavior. This has the potential to aid model training or, in some cases, enhance human-vehicle interaction.

2. Interrogating the AI model in real-time: LINGO-2 can predict and respond to questions about the scene and its decisions while driving.

3. Capturing real-time driving commentary: By linking vision, language, and action, LINGO-2 can leverage language to explain what it's doing and why, shedding light on the AI's decision-making process.

We'll explore these use cases in the sections below, showing examples of how we've tested LINGO-2 in our neural simulator Ghost Gym. Ghost Gym creates photorealistic 4D worlds for training, testing, and debugging our end-to-end AI driving models. Given the speed and complexity of real-world driving, we leverage offline simulation tools like Ghost Gym to evaluate the robustness of LINGO-2's features first.

In this setup, LINGO-2 can freely navigate through an ever-changing synthetic environment, where we can run our model against the same scenarios with different language instructions and observe how it adapts its behavior. We can gain deep insights and rigorously test how the model behaves in complex driving scenarios, communicates its actions, and responds to linguistic instructions.

Adapting Driving Behavior through Linguistic Instructions

LINGO-2 uniquely allows driving instruction through natural language. To do this, we swap the order of text tokens and driving action, which means language becomes a prompt for the driving behavior. This section demonstrates the model's ability to change its behavior in our neural simulator in response to language prompts for training purposes. This new capability opens up a new dimension of control and customization. The user can give commands or suggest alternative actions to the model. This is of particular value for training our AI and offers promise to enhance human-vehicle interaction for applications related to advanced driver assistance systems. In the examples below, we observe the same scenes repeated, with LINGO-2 adapting its behavior to follow linguistic instructions.

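The token-reordering idea can be made concrete with a minimal sketch (an assumed layout, not Wayve's actual token scheme): in commentary mode the model emits action tokens first and then explains them, whereas for instruction following the text tokens are placed before the action tokens, so language conditions the predicted driving behavior.

```python
# Minimal sketch of the token-order swap described above. The token names
# (<img*>, <wp*>) and the two modes are hypothetical, chosen only to show
# why reordering turns commentary into a driving prompt.

def build_sequence(vision_tokens, text_tokens, action_tokens, mode):
    if mode == "commentary":
        # act first, then describe: text explains the chosen action
        return vision_tokens + action_tokens + text_tokens
    if mode == "instruction":
        # text first, then act: language becomes a prompt for the action
        return vision_tokens + text_tokens + action_tokens
    raise ValueError(f"unknown mode: {mode}")

vis = ["<img0>", "<img1>"]
txt = ["turning", "left,", "clear", "road"]
act = ["<wp0>", "<wp1>", "<wp2>"]  # trajectory waypoint tokens

seq = build_sequence(vis, txt, act, "instruction")
print(seq)
# ['<img0>', '<img1>', 'turning', 'left,', 'clear', 'road', '<wp0>', '<wp1>', '<wp2>']
```

Because an auto-regressive model predicts each token from everything to its left, whichever modality comes first conditions the one that follows; the swap is what turns an explanation into an instruction.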
Example 1: Navigating a junction

In the three videos below, LINGO-2 navigates the same junction but is given different instructions: "turning left, clear road," "turning right, clear road," and "stopping at the give way line." We observe that LINGO-2 can follow the instructions, reflected by different driving behaviors at the intersection.

Example of LINGO-2 driving in Ghost Gym and being prompted to turn left on a clear road.

Example of LINGO-2 driving in Ghost Gym and being prompted to turn right on a clear road.

Example of LINGO-2 driving in Ghost Gym and being prompted to stop at the give-way line.

Example 2: Navigating around a bus

In the two videos below, LINGO-2 navigates around a bus. We can observe that LINGO-2 can follow the instructions to either hold back and "stop behind the bus" or "accelerate and overtake the bus."

Example of LINGO-2 in Wayve's Ghost Gym stopping behind the bus when instructed.

Example of LINGO-2 in Wayve's Ghost Gym overtaking a bus when instructed by text.

Example 3: Driving in a residential area

In the two videos below, LINGO-2 responds to linguistic instruction when driving in a residential area. It can correctly respond to the prompts "continue straight to follow the route" or "slow down for an upcoming turn."

Example of LINGO-2 in Wayve's Ghost Gym driving straight when instructed by text.

Example of LINGO-2 in Wayve's Ghost Gym turning right when instructed by text.

Interrogating an AI model in real-time: Video Question Answering (VQA)

Another possibility for language is to develop a layer of interaction between the robot car and the user that can give confidence in the decision-making capability of the driving model. Unlike our previous LINGO-1 research model, which could only answer questions retrospectively and was not directly connected to decision-making, LINGO-2 allows us to interrogate and prompt the actual model that is driving.

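Interrogating the model that is actually driving can be pictured with a toy sketch. Everything here (`vqa_step`, `toy_model`, the `<q>`/`<a>` markers) is a hypothetical stand-in: the point is that the question tokens join the same input stream that produces the trajectory, so the answer and the driving action come from one forward pass.

```python
# Toy sketch of real-time VQA against a driving model. All names and token
# markers are hypothetical stand-ins; the real model is a single learned
# network, not a Python callback.

def vqa_step(model, vision_tokens, question):
    # the question is appended to the visual context, not handled separately
    prompt = vision_tokens + ["<q>"] + question.split() + ["<a>"]
    answer, trajectory = model(prompt)
    return answer, trajectory

def toy_model(prompt):
    """Canned stand-in: keys its answer on a word in the question while
    still returning a (dummy) trajectory for the controller."""
    if "traffic" in prompt:
        answer = "The traffic lights are green."
    else:
        answer = "Unknown."
    return answer, [(0.0, 0.0), (1.0, 0.0)]

ans, traj = vqa_step(toy_model, ["<img0>"],
                     "What is the color of the traffic lights?")
print(ans)  # The traffic lights are green.
```

Because the answer is produced alongside the trajectory rather than by a separate retrospective system, it reflects what the driving model is actually attending to.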
Example 4: Traffic Lights

In this example, we show LINGO-2 driving through an intersection. When we ask the model, "What is the color of the traffic lights?" it correctly responds, "The traffic lights are green."

Example of LINGO-2 VQA in Ghost Gym

Example 5: Hazard Identification

In this example, LINGO-2 is prompted by the question, "Are there any hazards ahead of you?" It correctly identifies that "Yes, there is a cyclist ahead of me, which is why I am decelerating."

Example of LINGO-2 VQA in Ghost Gym

Example 6: Weather

In the following three examples, we ask LINGO-2 to describe "What is the weather like?" It can correctly identify that the weather ranges from "very cloudy, there is no sign of the sun" to "sunny" to "the weather is clear with a blue sky and scattered clouds."

Example of LINGO-2 VQA in Ghost Gym

Limitations

LINGO-2 marks a step-change in our progress to leverage natural language to enhance our AI driving models. While we are excited about the progress we are making, we also want to describe the current limitations of the model.

Language explanations from the driving model give us a strong idea of what the model might be thinking. However, more work is needed to quantify the alignment between explanations and decision-making. Future work will quantify and strengthen the connection between language, vision, and driving to reliably debug and explain model decisions. We expect to show in the real world that adding intermediate language reasoning in "chain-of-thought" driving helps solve edge cases and counterfactuals.

Additionally, we plan to investigate whether controlling the car's behavior with language in real-world settings can be done reliably and safely. Ghost Gym provides a safe off-road environment for testing, but more work needs to be done to ensure the model is robust to noise and misinterpretation of the commands. It should understand the context of human instructions while never violating appropriate limits of safe and responsible driving behavior. This functionality will be more suited to aid model testing and training for fully automated driving systems.

Conclusion

In this post, we have introduced LINGO-2, the first driving model trained on language that has driven on public roads. We are excited to showcase how LINGO-2 can respond to language instruction and explain its driving actions in real-time. This is a first step towards building embodied AI that can perform multiple tasks, starting with language and driving.

Wayve is a UK-based autonomous driving startup founded in 2017. Unlike many other self-driving companies, Wayve's core philosophy is to use end-to-end deep learning so that an AI system learns to drive the way a human does.

Some of Wayve's key characteristics:

1. End-to-end learning: Wayve's system maps perception inputs (such as camera images) directly to vehicle control commands, with no hand-engineered intermediate steps, letting the system discover driving strategies on its own.

2. Learning from less data: compared with traditional approaches that require massive training datasets, Wayve's AI aims to learn quickly from relatively little data, making it more flexible and adaptable.

3. Sim-to-real: Wayve first trains AI models in virtual environments and then transfers them to real-world cars for fine-tuning, which speeds up development considerably.

4. Multi-modal fusion: beyond vision, Wayve is experimenting with integrating natural-language instructions into driving decisions; the LINGO project explores language interaction for autonomous driving.

5. Safety and ethics: Wayve places strong emphasis on the safety and ethical questions of autonomous driving, aiming to build reliable, transparent systems that meet society's expectations.

Overall, Wayve represents an innovative line of thinking in autonomous driving. Its research could accelerate the field and bring transformative changes to future mobility. Although still at an exploratory stage, Wayve's work has clearly injected new energy into the driverless-vehicle space.
