This happened in April 2024. At the time I kept notes on the whole process in Apple Notes and hadn't planned to publish them, because it felt like a very private matter. But later I saw many similar cases online, with many people going through the same ordeal, so I decided to write up my experience and share it, hoping it can help anyone who runs into something similar.
In the days before the onset, I noticed brief bouts of low-frequency tinnitus in my left ear, but I didn't pay much attention. I'd had similar episodes before and assumed it was ordinary fatigue that would pass with some rest.
When I was very young I discovered that I could hear the sound of blood flowing behind my ear. How did I know it was blood flow? Because when I pressed on the vessel under the skin behind my ear, the sound stopped.
It never came back after I grew up, though I would occasionally get tinnitus that disappeared quickly.
One more small thing: I can move my left ear at will, the way cats, dogs, rabbits, and other mammals can, and the movement is clearly visible. I have no idea how I do it, but it's a neat trick.
That night, at around one in the morning, I was working on my MacBook when a continuous buzzing suddenly started in my left ear. It was so loud that I couldn't keep working at all. I stopped immediately and lay down flat to rest, hoping the symptoms would ease.
My first fear was that something was wrong with the nerves in my brain, so I rushed to get a CT scan. The results came back completely normal, which was a small relief.
But after a pure-tone audiometry test at the ENT department, the doctor's diagnosis was sudden sensorineural hearing loss. The diagnosis genuinely shocked me; I had never imagined I could go "suddenly deaf".

The doctor prescribed a treatment plan for me.
The idea behind it was mainly to improve blood supply around the cochlea and deliver nutrients to it.
The doctor told me the cure rate is around 70%, which gave me some hope. He also said hyperbaric oxygen therapy might help, but unfortunately their hospital didn't have the equipment.
Because the treatment wasn't working very well, I started considering other options. I went to Peking Union Medical College Hospital, where the doctor's plan was much the same as the one from Beijing Tsinghua Changgung Hospital. I asked whether hyperbaric oxygen would help, and the doctor patiently explained its pros and cons: it demands a fair amount of cooperation from the patient and can put some strain on the body. She also stressed that hyperbaric oxygen is not a required treatment for hearing loss or tinnitus.
After that I went to Beijing Tongren Hospital, which is one of the leading hospitals for ENT. The doctor there gave the same plan and explained the vasodilating effect of ginkgo leaf extract, which gave me a clearer picture of what the drugs were supposed to do. He also recommended hyperbaric oxygen therapy, but again the hospital lacked the equipment. Based on my pure-tone audiometry results, though, he said my loss was concentrated in the low frequencies, so the chances of recovery were relatively good.
4/10: The PLA General Hospital (301) has a hyperbaric oxygen chamber, so I booked a session there and tried acupuncture while I waited. To this day I still hold a negative view of traditional Chinese medicine, but in my situation I decided to give it a try anyway. In the acupuncture room I saw other patients, mostly elderly, including a few with hearing loss. To calm my nerves I struck up a conversation with the doctor. I asked, "Will acupuncture help my condition? Could you hit a nerve and leave me unable to move?" He replied, "You can ask them (pointing at the other patients in the room). Some of them say it helped. Give it a try; if it works, keep going, and if it doesn't, switch to something else." Lying face down on the bed, I had very thin needles placed in my scalp and on the backs of my fingertips, with an infrared heat lamp on me for about 30 minutes. Afterwards, placebo or not, things seemed to ease a little.
4/11: Yesterday's doctor wasn't in, so another physician took over with a different technique: a thicker needle inserted deep into the back of my neck, deep enough that I could hear it going into the flesh.
4/11: I finally got my turn in the hyperbaric oxygen chamber, and I had high hopes for this new therapy. There were a dozen or so people in the room, each with their own story. The woman on my left had completely lost hearing in one ear after a COVID infection and suspected the vaccine was to blame. The man in front of me had extensive burns from a fire, and another woman had been knocked down by a food-delivery rider and lost hearing from a brain injury. A little girl cried the whole time, held by her mother, with her grandmother at their side. Looking at these fellow patients, my feelings were complicated.
Yet after all of this treatment, I saw no significant improvement.
While scrolling Twitter, I found someone who had run into the same thing:
After six days of IV drips I've basically recovered; my ear still feels a bit blocked. The doctor says I need plenty of rest, otherwise it could come back.
— Ehco (@Ehco1996) April 14, 2024
I've also found fellow sufferers on V2EX; programmers seem fairly prone to this condition (the causes are mostly staying up late, anxiety, and stress...
I'm always telling people to take care of their health, but only when the hammer actually fell on me did I truly understand how much the body matters. From now on, when I can't hold on any longer, I'll rest 😶 https://t.co/ycDOzApTFN pic.twitter.com/rAlbkhgOTk
Seeing his experience, I reached out and we compared notes on the illness, taking some comfort in each other's company.

During treatment I read a lot of the related research, hoping to understand my condition better. The more I learned, the more complicated the disease seemed.
I was shocked to learn that performers of the "Thousand-Hand Guanyin" dance had been deafened by gentamicin; I never imagined there was a story like that behind it.
It also reminded me of the reports about restaurants adding antidiarrheal drugs to their dishes to head off the diarrhea their poor food hygiene would otherwise cause, and I couldn't help wondering whether the recent wave of sudden deafness cases might have anything to do with that.
It is now May 2025, a full year later. The tinnitus is still with me, sometimes stronger, sometimes weaker; it accompanies me every waking moment, and over time I've gradually gotten used to it. As for lasting effects: I used to be able to tell very precisely which direction a sound came from, and that is now harder; I could also pick out very faint sounds before, and that has become harder too.
This experience drove home how much health matters and gave me a much deeper understanding of sudden sensorineural hearing loss. Before falling ill I never imagined my ears could fail, let alone that the condition would be so common among people working in the internet industry. Comparing notes with others, we noticed the cases had a lot in common: almost all involved late nights and heavy stress. Many people asked whether long-term headphone use is a factor; my feeling is that the link is weak, since I, for one, rarely use headphones. Medically, the cause of sudden sensorineural hearing loss is still not fully understood, and an unclear cause usually means there is no definitive cure. So please rest well, don't stay up late, and take care of yourself; this is not something you want to have to treat. If you do run into a similar problem, see a doctor as soon as possible. I hope that sharing my experience can help others who may face the same situation.
ChatGPT needs little introduction. From research and development to education, from sales to even your local barber Tony, it's a name everyone has heard.
In an era marked by rapid advancements in artificial intelligence, ChatGPT has provided numerous conveniences and innovations.
Today, we delve deep into the definition of ChatGPT, its history and evolution, application scenarios, and future prospects, hoping to enlighten and aid our readers.
ChatGPT is a natural language processing application built on AI technology. Rooted in deep neural networks, it is pre-trained on vast amounts of text, which enables it to generate and manipulate text automatically. The "GPT" in its name stands for "Generative Pre-trained Transformer", and the lineage of models behind it looks like this:
[Transformer] → [GPT] → [GPT-2] → [GPT-3] → [InstructGPT]
In August 2017, Google released a blog post titled Transformer: A Novel Neural Network Architecture for Language Understanding, introducing the Transformer neural network architecture for language understanding tasks. With their self-attention mechanism, Transformers replaced traditional RNNs and CNNs, handled inputs of varying sequence length effectively, and achieved optimal or near-optimal performance on translation, Q&A, and summarization tasks.
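To make the self-attention idea concrete, here is a minimal NumPy sketch of scaled dot-product attention, the basic operation inside a Transformer. The function name, shapes, and toy data are illustrative assumptions, not code from the Transformer paper or from Google.

```python
# Minimal sketch of scaled dot-product self-attention with NumPy.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v). Returns (seq_len, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # similarity between every pair of positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                          # each position mixes information from all others

# Toy "sentence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
print(out.shape)  # (4, 8)
```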
In June 2018, OpenAI introduced its first pre-trained language model, GPT-1. Built on the Transformer architecture, it was pre-trained on a corpus of over 800 million words.
In November 2019, OpenAI released the full version of the GPT-2 language model, which gained widespread attention for its ability to generate natural language text on its own. With 1.5 billion parameters, GPT-2 can produce remarkably realistic and fluent text, and it has been used in text-generation applications such as chatbots, writing assistants, and smart customer service.
GPT-3, released in 2020, scaled the approach up to 175 billion parameters, further improving representation capability and diversity. Each upgraded version has improved performance and efficiency, enabling the models behind ChatGPT to tackle an ever wider range of language tasks.
InstructGPT was trained to follow the instructions in a prompt and provide detailed responses. To turn GPT-3 into InstructGPT, OpenAI devised a three-step procedure. First, the base model is given supervised fine-tuning (SFT) on human-written demonstrations. Second, a reward model (RM) is trained on human rankings of candidate outputs. Third, the SFT model is further fine-tuned with reinforcement learning against the reward model.
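The three stages are easier to see as a data flow. Below is a purely conceptual Python sketch of how the stages feed into one another; every function, dictionary, and example here is a hypothetical stand-in, not OpenAI's actual training code.

```python
# Conceptual sketch of the three InstructGPT training stages (SFT -> RM -> RL).
# All functions are illustrative stubs that only show how the stages connect.

def supervised_fine_tune(base_model, demonstrations):
    """Stage 1: fine-tune the base model on human-written prompt/response pairs."""
    return {"name": base_model, "stage": "sft", "num_demos": len(demonstrations)}

def train_reward_model(sft_model, preference_pairs):
    """Stage 2: learn a scalar reward from human rankings of model outputs."""
    return {"based_on": sft_model["name"], "num_comparisons": len(preference_pairs)}

def reinforce_with_reward(sft_model, reward_model, prompts):
    """Stage 3: optimize the SFT model against the reward model (e.g. with PPO)."""
    return {**sft_model, "stage": "rlhf", "num_rl_prompts": len(prompts)}

demonstrations = [("Explain photosynthesis", "Plants convert light into chemical energy ...")]
preference_pairs = [("answer A", "answer B", "A preferred")]
prompts = ["Write a haiku about the sea"]

sft = supervised_fine_tune("gpt-3", demonstrations)
rm = train_reward_model(sft, preference_pairs)
instruct_model = reinforce_with_reward(sft, rm, prompts)
print(instruct_model)
```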
Compared with GPT-3, InstructGPT has the upside of aligning better with human preferences. But the same property can be a downside: because it follows instructions so readily, malicious users can prompt it into producing untruthful or harmful output.
Nonetheless, InstructGPT is not only superior to GPT-3 in following commands but also aligns better with human intent. The AI alignment issue is well-known in the industry. It pinpoints the challenge of designing AI systems that understand our values and beliefs without disrupting them.
According to OpenAI, this is the first application of alignment, demonstrating that these techniques significantly enhance the alignment of general AI systems with human intent. The InstructGPT model is now deployed as the default language model on OpenAI’s API.
In ChatGPT's operation, context comprehension is crucial. By carrying the prior dialogue along, ChatGPT can better grasp the context of subsequent queries and generate more accurate responses. However, every API call consumes tokens, and the number grows with the length of the text, including the conversation history, that is sent in.
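As an illustration of how prior dialogue is carried along, here is a minimal sketch using the OpenAI Python SDK's Chat Completions endpoint. The model name and the example conversation are assumptions for demonstration; the point is that the whole messages list is sent on every call and counts toward the tokens consumed.

```python
# Minimal sketch: passing earlier turns as context via the Chat Completions API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "You are a concise technical assistant."},
    {"role": "user", "content": "What does GPT stand for?"},
    {"role": "assistant", "content": "Generative Pre-trained Transformer."},
    # The follow-up only makes sense because the earlier turns are included.
    {"role": "user", "content": "And which architecture is it built on?"},
]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)
print(response.usage.total_tokens)  # token usage grows with the length of the history
```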
The future prospects for ChatGPT are vast, and they will only broaden as AI technology continues to evolve.
To harness ChatGPT effectively, users may need to supply "fact assumptions" or dialogue background. With fine-tuning, users can bring their own datasets and customize the model, achieving better performance and adaptability for their domain.
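As a sketch of what "customizing datasets" can look like in practice, the snippet below writes a tiny training file in the JSONL chat format accepted by OpenAI's fine-tuning API. The file name, example dialogue, and the model mentioned in the comments are assumptions for illustration, not details from this article.

```python
# Minimal sketch: preparing a fine-tuning dataset as JSONL (one example per line).
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You answer questions about our product."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Account > Reset password."},
    ]},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")

# The file would then be uploaded and a fine-tuning job created, e.g.:
#   client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=..., model="gpt-3.5-turbo")
```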
In conclusion, ChatGPT is an excellent natural language processing model that, combined with fine-tuning, can be adapted to specific datasets and tasks. Its application areas will keep expanding, moving toward language interaction that is intelligent, natural, and human-centric.