Parse Officially Releases Open-Source PHP SDK
Jun 13, 2016 09:27 AM
Parse has released the Parse PHP SDK, aiming to let Parse be integrated "into a new class of apps and different use cases." The company also says this is its "first SDK for a server-side language, and the first truly open-source one."
Until now, Parse has offered several API libraries designed to make it easier to integrate Parse on the front end, including support for Objective-C, Java, .NET, and JavaScript; Parse also exposes its interface natively over REST. These libraries cover Parse's main use cases, so developers do not have to "rebuild their own backend for every service their app needs to access," for example by managing servers and writing server-side code.
Parse also provides a Cloud Code environment, built on its own JavaScript SDK, for scenarios that need some server-side logic. One of the benefits Cloud Code brings, for example, is that updates are immediately available in every environment without waiting for a new app release, so functionality can be changed on the fly. With the release of the Parse PHP SDK, the same benefits are now available from PHP.
The Parse PHP SDK is structured like the other Parse SDKs. It is built around ParseObject, which holds key-value pairs of schemaless, JSON-compatible data; a ParseObject can be saved, retrieved, updated, and deleted. Queries are modeled with ParseQuery, which supports both basic and relational queries. Parse also supports role-based access control, which provides a logical way to group users that share the same access rights to Parse data.
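To make that API shape concrete, below is a minimal PHP sketch of the workflow described above, based on the publicly documented Parse PHP SDK classes (ParseClient, ParseObject, ParseQuery). The application keys and the "GameScore" class and field names are placeholders for illustration, and the snippet assumes the SDK has been installed via Composer.

```php
<?php
// Assumes the SDK was installed with Composer (composer require parse/php-sdk).
require 'vendor/autoload.php';

use Parse\ParseClient;
use Parse\ParseObject;
use Parse\ParseQuery;

// Placeholder credentials: substitute your own application ID, REST key, and master key.
ParseClient::initialize('YOUR_APP_ID', 'YOUR_REST_KEY', 'YOUR_MASTER_KEY');

// Create and save a schemaless object (key-value pairs of JSON-compatible data).
$gameScore = new ParseObject('GameScore');
$gameScore->set('playerName', 'Sean Plott');
$gameScore->set('score', 1337);
$gameScore->save();

// Update a field on the same object and save again.
$gameScore->set('score', 1338);
$gameScore->save();

// Query objects of the same class with a basic equality constraint.
$query = new ParseQuery('GameScore');
$query->equalTo('playerName', 'Sean Plott');
foreach ($query->find() as $result) {
    echo $result->getObjectId() . ': ' . $result->get('score') . "\n";
}

// Delete the object when it is no longer needed.
$gameScore->destroy();
```

When targeting a self-hosted Parse Server rather than the original hosted service, the SDK additionally expects a server URL to be configured, but the ParseObject and ParseQuery calls above stay the same.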
Niraj Shah, a PHP developer based in London, UK, has already put together a quick-start tutorial for the Parse PHP SDK. The tutorial aims to keep things simple; according to Niraj, the Parse PHP SDK's "documentation isn't very well organised, and you may have to jump between documents to figure out a complete solution."
Download link for the open-source Parse PHP SDK: http://www.bkjia.com/codes/203051.html